In ansible, how to synchronize 2 folders on the same remote machine? - rsync

I have the following simple task:
copying everything in folder A to folder B. Since I have many hosts in a group, I use the following YAML task definition:
- name: Sync /etc/spark/conf to $SPARK_HOME/conf
  synchronize: src=/etc/spark/conf dest={{spark_home}}/conf
  delegate_to: "{{item}}"
  with_items: "{{play_hosts}}"
  tags: spark
However, running ansible-playbook gave me the following error:
TASK [cloudera : Sync /etc/spark/conf to $SPARK_HOME/conf] *********************
failed: [52.53.220.119 -> 52.53.200.0] (item=52.53.200.0) => {"cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/peng/.ssh/saphana.pem -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/etc/spark/conf\" \"52.53.220.119:/opt/spark/spark-1.6.2-bin-hadoop2.4/conf\"", "failed": true, "item": "52.53.200.0", "msg": "Warning: Identity file /home/peng/.ssh/saphana.pem not accessible: No such file or directory.\nPermission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.0]\n", "rc": 255}
failed: [52.53.200.193 -> 52.53.200.0] (item=52.53.200.0) => {"cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/peng/.ssh/saphana.pem -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/etc/spark/conf\" \"52.53.200.193:/opt/spark/spark-1.6.2-bin-hadoop2.4/conf\"", "failed": true, "item": "52.53.200.0", "msg": "Warning: Identity file /home/peng/.ssh/saphana.pem not accessible: No such file or directory.\nPermission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.0]\n", "rc": 255}
ok: [52.53.200.0 -> 52.53.200.0] => (item=52.53.200.0)
ok: [52.53.220.119 -> 52.53.220.119] => (item=52.53.220.119)
failed: [52.53.200.193 -> 52.53.220.119] (item=52.53.220.119) => {"cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/peng/.ssh/saphana.pem -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/etc/spark/conf\" \"52.53.200.193:/opt/spark/spark-1.6.2-bin-hadoop2.4/conf\"", "failed": true, "item": "52.53.220.119", "msg": "Warning: Identity file /home/peng/.ssh/saphana.pem not accessible: No such file or directory.\nPermission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.0]\n", "rc": 255}
failed: [52.53.200.0 -> 52.53.220.119] (item=52.53.220.119) => {"cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/peng/.ssh/saphana.pem -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/etc/spark/conf\" \"52.53.200.0:/opt/spark/spark-1.6.2-bin-hadoop2.4/conf\"", "failed": true, "item": "52.53.220.119", "msg": "Warning: Identity file /home/peng/.ssh/saphana.pem not accessible: No such file or directory.\nPermission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.0]\n", "rc": 255}
ok: [52.53.200.193 -> 52.53.200.193] => (item=52.53.200.193)
failed: [52.53.220.119 -> 52.53.200.193] (item=52.53.200.193) => {"cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/peng/.ssh/saphana.pem -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/etc/spark/conf\" \"52.53.220.119:/opt/spark/spark-1.6.2-bin-hadoop2.4/conf\"", "failed": true, "item": "52.53.200.193", "msg": "Warning: Identity file /home/peng/.ssh/saphana.pem not accessible: No such file or directory.\nPermission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.0]\n", "rc": 12}
failed: [52.53.200.0 -> 52.53.200.193] (item=52.53.200.193) => {"cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/peng/.ssh/saphana.pem -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"/etc/spark/conf\" \"52.53.200.0:/opt/spark/spark-1.6.2-bin-hadoop2.4/conf\"", "failed": true, "item": "52.53.200.193", "msg": "Warning: Identity file /home/peng/.ssh/saphana.pem not accessible: No such file or directory.\nPermission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\nrsync error: unexplained error (code 255) at io.c(226) [sender=3.1.0]\n", "rc": 255}
Apparently, Ansible is building permutations of all 3 of my hosts and synchronizing between every pair (so 9 rsync runs are performed). How do I avoid this and tell Ansible to run rsync only locally on each host?
UPDATE: I've changed my task definition to use delegate.host:
- name: Sync /etc/spark/conf to $SPARK_HOME/conf
  synchronize: src=/etc/spark/conf dest={{spark_home}}/conf
  delegate_to: delegate.host
  tags: spark
But this is clearly not interpreted correctly by the Ansible engine; the debug log reveals that it is not substituted with the host's IP address:
ESTABLISH SSH CONNECTION FOR USER: None
SSH: EXEC ssh -C -q -o ControlMaster=auto -o
ControlPersist=60s -o KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/peng/.ansible/cp/ansible-ssh-%h-%p-%r
delegate.host '/bin/sh -c '"'"'( umask 77 && mkdir -p "echo $HOME/.ansible/tmp/ansible-tmp-1470667606.38-157157938048153" &&
echo ansible-tmp-1470667606.38-157157938048153="echo
$HOME/.ansible/tmp/ansible-tmp-1470667606.38-157157938048153" ) &&
sleep 0'"'"''
This looks like a deprecated feature. I'm using Ansible 2.1.0.0.

Solved:
- name: Sync /etc/spark/conf to $SPARK_HOME/conf
  synchronize:
    src: /etc/spark/conf
    dest: "{{spark_home}}"
    copy_links: true
  delegate_to: "{{ inventory_hostname }}"
  tags: spark
delegate.host has probably been removed in favour of the inventory_hostname variable.
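As a side note, a same-host copy can also be done without rsync at all by using the copy module with remote_src. A minimal sketch, reusing the spark_home variable from above (recursive directory copies with remote_src are only handled by newer Ansible releases, so treat this as an assumption to verify on your version):
- name: Copy /etc/spark/conf into $SPARK_HOME/conf without rsync
  copy:
    src: /etc/spark/conf/          # trailing slash copies the directory contents
    dest: "{{ spark_home }}/conf/"
    remote_src: yes                # src lives on the managed host, not the controller
  tags: spark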

Related

Ambari HDP test Kerberos client failed

I tried to enable Kerberos for Ambari HDP and ran into an issue: the "test Kerberos client" step failed.
Error Message: Cannot run program "kinit": error=2, No such file or directory
I did the following:
Setup kerberos server:
sudo yum -y install krb5-server krb5-libs krb5-workstation
Edit /etc/krb5.conf
[libdefaults]
  default_realm = TEST.COM.VN
[realms]
  TEST.COM.VN = {
    kdc = domain.com
    admin_server = domain.com
  }
Edit /var/kerberos/krb5kdc/kadm5.acl
*/admin@TEST.COM.VN *
Then I created the Kerberos database
sudo kdb5_util create -s
Start service
sudo systemctl start krb5kdc; sudo systemctl start kadmin;
Create the admin principal
sudo kadmin.local -q "addprinc admin/admin"; sudo systemctl restart kadmin
Setup JCE for all nodes
sudo unzip -o -j -q jce_policy-8.zip -d /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.332.b09-1.el7_9.x86_64/jre/lib/security/
sudo ambari-server restart
Enable Kerberos with the Ambari wizard
Then it failed.
I checked the logs on the Ambari server, but they are not enough to debug; the Ambari agent, KDC, and kadmin logs contain nothing useful.
2022-07-04 17:35:49,643 ERROR [ambari-client-thread-315] KerberosHelperImpl:2419 - Cannot validate credentials: org.apache.ambari.server.AmbariException: Failed to execute the command: Cannot run program "kinit": error=2, No such file or directory
2022-07-04 17:35:49,645 ERROR [ambari-client-thread-315] AbstractResourceProvider:295 - Caught AmbariException when creating a resource
org.apache.ambari.server.AmbariException: Failed to execute the command: Cannot run program "kinit": error=2, No such file or directory
at org.apache.ambari.server.controller.KerberosHelperImpl.validateKDCCredentials(KerberosHelperImpl.java:2167)
at org.apache.ambari.server.controller.KerberosHelperImpl.handleTestIdentity(KerberosHelperImpl.java:2417)
at org.apache.ambari.server.controller.KerberosHelperImpl.createTestIdentity(KerberosHelperImpl.java:1114)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.createAction(AmbariManagementControllerImpl.java:4113)
at org.apache.ambari.server.controller.internal.RequestResourceProvider$1.invoke(RequestResourceProvider.java:283)
at org.apache.ambari.server.controller.internal.RequestResourceProvider$1.invoke(RequestResourceProvider.java:212)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:465)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:288)
at org.apache.ambari.server.controller.internal.RequestResourceProvider.createResources(RequestResourceProvider.java:212)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:296)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:50)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:68)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:164)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:128)
at org.apache.ambari.server.api.services.RequestService.createRequests(RequestService.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:294)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.apache.ambari.server.security.authentication.AmbariDelegatingAuthenticationFilter.doFilter(AmbariDelegatingAuthenticationFilter.java:135)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:95)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at org.apache.ambari.server.api.ContentTypeOverrideFilter.doFilter(ContentTypeOverrideFilter.java:146)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:73)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:53)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:130)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:51)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1340)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1242)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:690)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:221)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:210)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:140)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:503)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Failed to execute the command: Cannot run program "kinit": error=2, No such file or directory
at org.apache.ambari.server.serveraction.kerberos.KerberosOperationHandler.executeCommand(KerberosOperationHandler.java:737)
at org.apache.ambari.server.serveraction.kerberos.KDCKerberosOperationHandler.executeCommand(KDCKerberosOperationHandler.java:248)
at org.apache.ambari.server.serveraction.kerberos.KDCKerberosOperationHandler.init(KDCKerberosOperationHandler.java:322)
at org.apache.ambari.server.serveraction.kerberos.KDCKerberosOperationHandler.open(KDCKerberosOperationHandler.java:114)
at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.open(MITKerberosOperationHandler.java:95)
at org.apache.ambari.server.controller.KerberosHelperImpl.validateKDCCredentials(KerberosHelperImpl.java:2133)
... 117 more
Caused by: java.io.IOException: Cannot run program "kinit": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.ambari.server.utils.ShellCommandUtil.runCommand(ShellCommandUtil.java:457)
at org.apache.ambari.server.serveraction.kerberos.KerberosOperationHandler.executeCommand(KerberosOperationHandler.java:733)
... 122 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 124 more
How can I fix it? Thank you.

Error when deploying to AWS lambda, yarn serverless nextjs monorepo

I currently have a monorepo set up with Serverless and it builds alright, but when I run a custom deploy script, I get the following error:
DEBUG ─ Executing the template's components graph.
error:
Error: Command failed with ENOENT: node_modules/.bin/next build
spawn node_modules/.bin/next ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:268:19)
at onErrorNT (internal/child_process.js:470:16)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
errno: 'ENOENT',
code: 'ENOENT',
syscall: 'spawn node_modules/.bin/next',
path: 'node_modules/.bin/next',
spawnargs: [ 'build' ],
originalMessage: 'spawn node_modules/.bin/next ENOENT',
shortMessage: 'Command failed with ENOENT: node_modules/.bin/next build\n' +
'spawn node_modules/.bin/next ENOENT',
command: 'node_modules/.bin/next build',
escapedCommand: '"node_modules/.bin/next" build',
exitCode: undefined,
signal: undefined,
signalDescription: undefined,
stdout: '',
stderr: '',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
}
4s › web › Error: Command failed with ENOENT: node_modules/.bin/next build
spawn node_modules/.bin/next ENOENT
I tried to set next.config.js with
const nextConfig = {
  experimental: {
    externalDir: true,
  },
};
export default nextConfig;
Still the error persists.
My deploy script is "deploy": "AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=projec1 yarn components-v1 --debug"

How to check if an encrypted variable is decrypted?

I have an Ansible encrypted variable. Now I'd like to be able to run my playbook even when I don't unlock the variable (with --ask-vault-pass) and just skip the tasks that depend on it. Ideally with a warning saying that the task was skipped.
Now when I run my playbook without --ask-vault-pass, it fails with an error:
fatal: [...]: FAILED! => {"changed": false, "msg": "AnsibleError: An unhandled exception occurred while templating '{{ (samba_passwords | string | from_yaml)[samba_username] }}'. Error was a <class 'ansible.parsing.vault.AnsibleVaultError'>, original message: Attempting to decrypt but no vault secrets found"}
Is there a way how to check in the when: clause that an encrypted variable is not decrypted and thus inaccessible?
Q: "Check if an encrypted variable is decrypted. Skip the tasks that depend on it. Ideally with a warning saying that the task was skipped."
A: For example, given the file with the variable
shell> cat vars-test.yml
test_var1: test var1
Encrypt the file
shell> ansible-vault encrypt vars-test.yml
New Vault password:
Confirm New Vault password:
Encryption successful
shell> cat vars-test.yml
$ANSIBLE_VAULT;1.1;AES256
61373230346437306135303463393166323063656561623863306333313837666561653466393835
3738666532303836376139613766343930346263633032330a323336643061373039613330653237
30666364376266396633613162626536383161306262613062373239343232663935376364383431
6335623366613834360a336531656537626662376166323766376433653232633139383636613963
64356632633863353534323636313231633866613635343962383463636565303032
Then the playbook
shell> cat pb.yml
- hosts: test_01
  tasks:
    - include_vars: vars-test.yml
      ignore_errors: true
    - set_fact:
        test_var1: "{{ test_var1|default('default') }}"
    - name: Execute tasks if test_var1 was decrypted
      block:
        - debug:
            msg: Execute task1
        - debug:
            msg: Execute task2
      when: test_var1 != 'default'
gives (abridged)
shell> ansible-playbook pb.yml --ask-vault-pass
TASK [include_vars] ****
ok: [test_01]
TASK [set_fact] ****
ok: [test_01]
TASK [debug] ****
ok: [test_01] =>
msg: Execute task1
TASK [debug] ****
ok: [test_01] =>
msg: Execute task2
If you don't provide the password, the playbook gives (abridged)
shell> ansible-playbook pb.yml
PLAY [test_01] ****
TASK [include_vars] ****
fatal: [test_01]: FAILED! => changed=false
ansible_facts: {}
ansible_included_var_files: []
message: Attempting to decrypt but no vault secrets found
...ignoring
TASK [set_fact] ****
ok: [test_01]
TASK [debug] ****
skipping: [test_01]
TASK [debug] ****
skipping: [test_01]
I've researched this but haven't found a way to do that. The easy way to solve this case would be to use ignore_errors: yes on the task.
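A minimal sketch of that simpler approach, reusing the samba_passwords and samba_username names from the error message above (depending on the Ansible version, ignore_errors may not suppress every templating failure, so verify it against your setup):
- name: Use the vaulted variable if it can be decrypted
  debug:
    msg: "{{ (samba_passwords | string | from_yaml)[samba_username] }}"
  ignore_errors: yes
Without the vault password the task is reported as failed but ignored, and the rest of the play continues.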

Elasticsearch 2.4 nodes do not form a cluster with ConnectTransportException

I am already running an ELK stack with Elasticsearch (ES) 1.7 in Docker containers on 3 nodes, each running one ES container, behind an nginx server. Now I am trying to upgrade ES to 2.4.0. The root user is not allowed in ES 2.4.0, so I am using the -Des.root.insecure.allow=true option.
#Pulling SLES12 thin base image
FROM private-registry-1
#Author
MAINTAINER xyz
# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2
RUN zypper --no-gpg-checks -n refresh
#Install required packages and dependencies
RUN zypper -n in net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1
#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_CLUSTER_NAME=ccs-elasticsearch
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_DATA_PATH="//data"
ENV ES_LOGS_PATH="//var//log"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300
WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR} \
&& cp ${ES_DIR}/config/elasticsearch.yml ${ES_CONFIG_PATH}/elasticsearch-default.yml
#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}
#Removing binary files which are not needed
RUN zypper -n rm wget
# Removing zypper repos
RUN zypper rr caspiancs_common
COPY query-crs-es.sh ${ES_DIR}/bin/query-crs-es.sh
RUN chmod +x ${ES_DIR}/bin/query-crs-es.sh
COPY query-crs-wrapper.py ${ES_DIR}/bin/query-crs-wrapper.py
RUN chmod +x ${ES_DIR}/bin/query-crs-wrapper.py
ENV CRS_PARSER_PYTHON_SCRIPT="${ES_DIR}//bin//query-crs-wrapper.py"
#Copy elastic search bootstrap script
COPY elasticsearch-bootstrap-and-run.sh ${ES_DIR}/
RUN chmod +x ${ES_DIR}/elasticsearch-bootstrap-and-run.sh
COPY config-es-cluster ${ES_DIR}/bin/config-es-cluster
RUN chmod +x ${ES_DIR}/bin/config-es-cluster
COPY elasticsearch-config-script ${ES_DIR}/bin/elasticsearch-config-script
RUN chmod +x ${ES_DIR}/bin/elasticsearch-config-script
#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/elasticsearch-bootstrap-and-run.sh
The configuration file will be modified by elasticsearch-config and config-es-cluster, mentioned in the Dockerfile, as follows:
#Bootstrap script to configure elasticsearch.yml file
echo "cluster.name: ${ES_CLUSTER_NAME}" > ${ES_CONFIG_PATH}/elasticsearch.yml
echo "path.data: ${ES_DATA_PATH}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "path.logs: ${ES_LOGS_PATH}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#Performance optimization settings
echo "index.number_of_replicas: 1" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "index.number_of_shards: 3" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "discovery.zen.ping.multicast.enabled: false" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "bootstrap.mlockall: true" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "indices.memory.index_buffer_size: 50%" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#Search thread pool
echo "threadpool.search.type: fixed" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.search.size: 20" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.search.queue_size: 100000" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#Index thread pool
echo "threadpool.index.type: fixed" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.index.size: 60" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "threadpool.index.queue_size: 200000" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#publish host as container host address
#echo "network.publish_host: ${CONTAINER_HOST_ADDRESS}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.bind_host: ${CONTAINER_HOST_ADDRESS}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.publish_host: ${CONTAINER_PRIVATE_IP}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.bind_host: ${CONTAINER_PRIVATE_IP}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "network.host: ${CONTAINER_HOST_ADDRESS}" >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "network.host: 0.0.0.0" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "htpp.port: 9200" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#echo "transport.tcp.port: 9300-9400" >> ${ES_CONFIG_PATH}/elasticsearch.yml
#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
echo "discovery.zen.minimum_master_nodes: 1" >> ${ES_CONFIG_PATH}/elasticsearch.yml
ELASTICSEARCH_IPS is an array of the IPs of the other nodes, obtained on each node by running a script called query-crs-es.sh. Eventually the array will contain the IPs of the other two nodes of the cluster. Please note these are the nodes' IPs, not the containers' private IPs.
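For illustration, once that placeholder has been substituted, the discovery part of the generated elasticsearch.yml on one node looks roughly like this (the two addresses below are hypothetical host IPs of the other nodes):
discovery.zen.ping.unicast.hosts: ["10.240.135.141", "10.240.135.142"]
discovery.zen.minimum_master_nodes: 1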
Whenever I run the containers, I use Ansible. During start-up, all nodes come up but fail to form a cluster. I consistently get these errors:
Node1:
[2016-10-07 09:45:23,313][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-07 09:45:23,474][INFO ][node ] [Dragon Lord] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-07 09:45:23,474][INFO ][node ] [Dragon Lord] initializing ...
[2016-10-07 09:45:23,970][INFO ][plugins ] [Dragon Lord] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-07 09:45:23,994][INFO ][env ] [Dragon Lord] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [2.5tb], net total_space [2.5tb], spins? [possibly], types [xfs]
[2016-10-07 09:45:23,994][INFO ][env ] [Dragon Lord] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-07 09:45:24,028][WARN ][threadpool ] [Dragon Lord] requested thread pool size [60] for [index] is too large; setting to maximum [32] instead
[2016-10-07 09:45:25,540][INFO ][node ] [Dragon Lord] initialized
[2016-10-07 09:45:25,540][INFO ][node ] [Dragon Lord] starting ...
[2016-10-07 09:45:25,687][INFO ][transport ] [Dragon Lord] publish_address {172.17.0.15:9300}, bound_addresses {[::]:9300}
[2016-10-07 09:45:25,693][INFO ][discovery ] [Dragon Lord] ccs-elasticsearch/5wNwWJRFRS-2dRY5AGqqGQ
[2016-10-07 09:45:28,721][INFO ][cluster.service ] [Dragon Lord] new_master {Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-10-07 09:45:28,765][INFO ][http ] [Dragon Lord] publish_address {172.17.0.15:9200}, bound_addresses {[::]:9200}
[2016-10-07 09:45:28,765][INFO ][node ] [Dragon Lord] started
[2016-10-07 09:45:28,856][INFO ][gateway ] [Dragon Lord] recovered [20] indices into cluster_state
Node2:
[2016-10-07 09:45:58,561][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-07 09:45:58,729][INFO ][node ] [Defensor] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-07 09:45:58,729][INFO ][node ] [Defensor] initializing ...
[2016-10-07 09:45:59,215][INFO ][plugins ] [Defensor] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-07 09:45:59,237][INFO ][env ] [Defensor] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [2.5tb], net total_space [2.5tb], spins? [possibly], types [xfs]
[2016-10-07 09:45:59,237][INFO ][env ] [Defensor] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-07 09:45:59,266][WARN ][threadpool ] [Defensor] requested thread pool size [60] for [index] is too large; setting to maximum [32] instead
[2016-10-07 09:46:00,733][INFO ][node ] [Defensor] initialized
[2016-10-07 09:46:00,733][INFO ][node ] [Defensor] starting ...
[2016-10-07 09:46:00,833][INFO ][transport ] [Defensor] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-07 09:46:00,837][INFO ][discovery ] [Defensor] ccs-elasticsearch/RXALMe9NQVmbCz5gg1CwHA
[2016-10-07 09:46:03,876][WARN ][discovery.zen ] [Defensor] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-10-07 09:46:06,899][WARN ][discovery.zen ] [Defensor] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-10-07 09:46:09,917][WARN ][discovery.zen ] [Defensor] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
Node3:
[2016-10-07 09:45:58,624][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
[2016-10-07 09:45:58,806][INFO ][node ] [Scarlet Beetle] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-07 09:45:58,806][INFO ][node ] [Scarlet Beetle] initializing ...
[2016-10-07 09:45:59,341][INFO ][plugins ] [Scarlet Beetle] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-07 09:45:59,363][INFO ][env ] [Scarlet Beetle] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [2.5tb], net total_space [2.5tb], spins? [possibly], types [xfs]
[2016-10-07 09:45:59,363][INFO ][env ] [Scarlet Beetle] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-07 09:45:59,390][WARN ][threadpool ] [Scarlet Beetle] requested thread pool size [60] for [index] is too large; setting to maximum [32] instead
[2016-10-07 09:46:00,795][INFO ][node ] [Scarlet Beetle] initialized
[2016-10-07 09:46:00,795][INFO ][node ] [Scarlet Beetle] starting ...
[2016-10-07 09:46:00,927][INFO ][transport ] [Scarlet Beetle] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-07 09:46:00,931][INFO ][discovery ] [Scarlet Beetle] ccs-elasticsearch/SFWrVwKRSUu--4KiZK4Kfg
[2016-10-07 09:46:03,965][WARN ][discovery.zen ] [Scarlet Beetle] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
... 3 more
[2016-10-07 09:46:06,990][WARN ][discovery.zen ] [Scarlet Beetle] failed to connect to master [{Dragon Lord}{5wNwWJRFRS-2dRY5AGqqGQ}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Dragon Lord][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
As you can see from the logs, nodes 2 and 3 are aware of the master, node 1, but are unable to connect to it. I have tried most of the network.host configurations you can see commented out in the configuration script above, and none of them work.
Any leads will be appreciated.
This is the state of ports:
netstat -nlp | grep 9200
tcp 0 0 10.240.135.140:9200 0.0.0.0:* LISTEN 188116/docker-proxy
tcp 0 0 10.240.137.112:9200 0.0.0.0:* LISTEN 187240/haproxy
netstat -nlp | grep 9300
tcp 0 0 :::9300 :::* LISTEN 188085/docker-proxy
I was able to form the cluster with the following settings:
network.publish_host=CONTAINER_HOST_ADDRESS, i.e. the address of the node where the container is running
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS
transport.publish_host is important when you are running ES behind a proxy/load balancer such as nginx or haproxy.
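Expressed in elasticsearch.yml, those settings look roughly like the sketch below; 10.240.135.140 is taken from the netstat output above as a stand-in for a node's host address, so substitute each node's own address:
# elasticsearch.yml (one per node; the publish addresses differ per node)
network.bind_host: 0.0.0.0            # listen on all interfaces inside the container
network.publish_host: 10.240.135.140  # address of the Docker host, not the container IP
transport.publish_port: 9300          # port other nodes should use to reach this node
transport.publish_host: 10.240.135.140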

Symfony2 deployment by Capifony crash on symfony:assets:install (vendor/autoload.php)

The deploy crashes on the symfony:assets:install task. My deploy configuration:
set :application, "example"
set :domain, "#{application}.com"
set :deploy_to, "/fake/path"
set :app_path, "app"
set :user, "opr"
set :use_composer, true
set :composer_options, "--no-dev --verbose --prefer-dist --optimize-autoloader"
set :update_vendors, true
set :repository, "ssh://hg@bitbucket.org/fake/path"
set :branch, "default"
set :scm, :mercurial
# Or: `accurev`, `bzr`, `cvs`, `darcs`, `subversion`, `mercurial`, `perforce`, or `none`
set :model_manager, "doctrine"
# Or: `propel`
role :web, domain # Your HTTP server, Apache/etc
role :app, domain, :primary => true # This may be the same as your `Web` server
role :db, domain, :primary => true # This is where Symfony2 migrations will run
set :keep_releases, 5
set :use_sudo, false
set :shared_files, ["app/config/parameters.yml"]
set :shared_children, [app_path + "/logs", web_path + "/uploads"]
after "deploy:update", "upload_parameters"
# after "symfony:assets:install", "symfony:assetic:dump"
after "deploy:create_symlink" do
run "chmod -R a+rw #{deploy_to}/current/app/cache"
run "chmod -R a+rw #{deploy_to}/current/app/logs"
end
task :upload_parameters do
  origin_file = "app/config/parameters.yml"
  destination_file = shared_path + "/app/config/parameters.yml"
  try_sudo "mkdir -p #{File.dirname(destination_file)}"
  top.upload(origin_file, destination_file)
end
after "deploy:restart", "deploy:cleanup"
# Be more verbose by uncommenting the following line
logger.level = Logger::MAX_LEVEL
# default_run_options[:pty] = true
ssh_options[:forward_agent] = true
Here is the full output:
* 2014-01-17 14:23:10 executing `deploy'
* 2014-01-17 14:23:10 executing `deploy:update'
** transaction: start
* 2014-01-17 14:23:10 executing `deploy:update_code'
triggering before callbacks for `deploy:update_code'
--> Updating code base with checkout strategy
executing locally: "hg log -r default --template \"{node|short}\""
command finished in 100ms
* executing "hg clone --noupdate ssh://hg#bitbucket.org/project /fake/path/releases/20140117122310 && hg update --repository /fake/path/releases/20140117122310 --clean 588549f82b3b && (echo 588549f82b3b > /fake/path/releases/20140117122310/REVISION)"
servers: ["example.com"]
Password:
[example.com] executing command
** [example.com :: out] requesting all changes
** adding changesets
** adding manifests
** adding file changes
** added 1090 changesets with 13637 changes to 9254 files
** [example.com :: out] 477 files updated, 0 files merged, 0 files removed, 0 files unresolved
command finished in 60514ms
* 2014-01-17 14:24:14 executing `deploy:finalize_update'
* executing "chmod -R g+w /fake/path/releases/20140117122310"
servers: ["example.com"]
[example.com] executing command
command finished in 207ms
--> Creating cache directory
* executing "sh -c 'if [ -d /fake/path/releases/20140117122310/app/cache ] ; then rm -rf /fake/path/releases/20140117122310/app/cache; fi'"
servers: ["example.com"]
[example.com] executing command
command finished in 157ms
* executing "sh -c 'mkdir -p /fake/path/releases/20140117122310/app/cache && chmod -R 0777 /fake/path/releases/20140117122310/app/cache'"
servers: ["example.com"]
[example.com] executing command
command finished in 158ms
* executing "chmod -R g+w /fake/path/releases/20140117122310/app/cache"
servers: ["example.com"]
[example.com] executing command
command finished in 154ms
* 2014-01-17 14:24:15 executing `deploy:share_childs'
--> Creating symlinks for shared directories
* executing "mkdir -p /fake/path/shared/app/logs"
servers: ["example.com"]
[example.com] executing command
command finished in 156ms
* executing "sh -c 'if [ -d /fake/path/releases/20140117122310/app/logs ] ; then rm -rf /fake/path/releases/20140117122310/app/logs; fi'"
servers: ["example.com"]
[example.com] executing command
command finished in 153ms
* executing "ln -nfs /fake/path/shared/app/logs /fake/path/releases/20140117122310/app/logs"
servers: ["example.com"]
[example.com] executing command
command finished in 151ms
* executing "mkdir -p /fake/path/shared/web/uploads"
servers: ["example.com"]
[example.com] executing command
command finished in 152ms
* executing "sh -c 'if [ -d /fake/path/releases/20140117122310/web/uploads ] ; then rm -rf /fake/path/releases/20140117122310/web/uploads; fi'"
servers: ["example.com"]
[example.com] executing command
command finished in 153ms
* executing "ln -nfs /fake/path/shared/web/uploads /fake/path/releases/20140117122310/web/uploads"
servers: ["example.com"]
[example.com] executing command
command finished in 151ms
--> Creating symlinks for shared files
* executing "mkdir -p /fake/path/shared/app/config"
servers: ["example.com"]
[example.com] executing command
command finished in 153ms
* executing "touch /fake/path/shared/app/config/parameters.yml"
servers: ["example.com"]
[example.com] executing command
command finished in 150ms
* executing "ln -nfs /fake/path/shared/app/config/parameters.yml /fake/path/releases/20140117122310/app/config/parameters.yml"
servers: ["example.com"]
[example.com] executing command
command finished in 153ms
--> Normalizing asset timestamps
* executing "find /fake/path/releases/20140117122310/web/css /fake/path/releases/20140117122310/web/images /fake/path/releases/20140117122310/web/js -exec touch -t 201401171224.16 {} ';' &> /dev/null || true"
servers: ["example.com"]
[example.com] executing command
command finished in 155ms
triggering after callbacks for `deploy:finalize_update'
* 2014-01-17 14:24:16 executing `symfony:composer:update'
triggering before callbacks for `symfony:composer:update'
* 2014-01-17 14:24:16 executing `symfony:composer:get'
* executing "ls -x /fake/path/releases"
servers: ["example.com"]
[example.com] executing command
command finished in 152ms
* executing "if [ -e /fake/path/releases/20140113143043/composer.phar ]; then echo 'true'; fi"
servers: ["example.com"]
[example.com] executing command
command finished in 153ms
--> Copying Composer from previous release
* executing "sh -c 'cp /fake/path/releases/20140113143043/composer.phar /fake/path/releases/20140117122310/'"
servers: ["example.com"]
[example.com] executing command
command finished in 161ms
* executing "if [ -e /fake/path/releases/20140117122310/composer.phar ]; then echo 'true'; fi"
servers: ["example.com"]
[example.com] executing command
command finished in 154ms
--> Updating Composer
* executing "sh -c 'cd /fake/path/releases/20140117122310 && php composer.phar self-update'"
servers: ["example.com"]
[example.com] executing command
command finished in 182ms
--> Updating Composer dependencies
* executing "sh -c 'cd /fake/path/releases/20140117122310 && php composer.phar update --no-dev --verbose --prefer-dist --optimize-autoloader'"
servers: ["example.com"]
[example.com] executing command
command finished in 185ms
* 2014-01-17 14:24:17 executing `symfony:bootstrap:build'
--> Building bootstrap file
* executing "if [ -e /fake/path/releases/20140117122310/bin/build_bootstrap ]; then echo 'true'; fi"
servers: ["example.com"]
[example.com] executing command
command finished in 154ms
* executing "sh -c 'cd /fake/path/releases/20140117122310 && test -f vendor/sensio/distribution-bundle/Sensio/Bundle/DistributionBundle/Resources/bin/build_bootstrap.php && php vendor/sensio/distribution-bundle/Sensio/Bundle/DistributionBundle/Resources/bin/build_bootstrap.php app || echo 'vendor/sensio/distribution-bundle/Sensio/Bundle/DistributionBundle/Resources/bin/build_bootstrap.php not found, skipped''"
servers: ["example.com"]
[example.com] executing command
** [out :: example.com] vendor/sensio/distribution-bundle/Sensio/Bundle/DistributionBundle/Resources/bin/build_bootstrap.php
command finished in 157ms
* 2014-01-17 14:24:18 executing `symfony:composer:dump_autoload'
--> Dumping an optimized autoloader
* executing "sh -c 'cd /fake/path/releases/20140117122310 && php composer.phar dump-autoload --optimize'"
servers: ["example.com"]
[example.com] executing command
command finished in 186ms
* 2014-01-17 14:24:18 executing `symfony:assets:install'
--> Installing bundle's assets
* executing "sh -c 'cd /fake/path/releases/20140117122310 && php app/console assets:install web --env=prod'"
servers: ["example.com"]
[example.com] executing command
*** [err :: example.com] PHP Warning: require(/fake/path/releases/20140117122310/app/../vendor/autoload.php): failed to open stream: No such file or directory in /fake/path/releases/20140117122310/app/autoload.php on line 5
*** [err :: example.com] PHP Stack trace:
*** [err :: example.com] PHP 1. {main}() /fake/path/releases/20140117122310/app/console:0
*** [err :: example.com] PHP 2. require_once() /fake/path/releases/20140117122310/app/console:10
*** [err :: example.com] PHP 3. require_once() /fake/path/releases/20140117122310/app/bootstrap.php.cache:3
*** [err :: example.com] PHP Fatal error: require(): Failed opening required '/fake/path/releases/20140117122310/app/../vendor/autoload.php' (include_path='.:/usr/share/php:/usr/share/pear') in /fake/path/releases/20140117122310/app/autoload.php on line 5
*** [err :: example.com] PHP Stack trace:
*** [err :: example.com] PHP 1. {main}() /fake/path/releases/20140117122310/app/console:0
*** [err :: example.com] PHP 2. require_once() /fake/path/releases/20140117122310/app/console:10
*** [err :: example.com] PHP 3. require_once() /fake/path/releases/20140117122310/app/bootstrap.php.cache:3
command finished in 188ms
*** [deploy:update_code] rolling back
* executing "rm -rf /fake/path/releases/20140117122310; true"
servers: ["example.com"]
[example.com] executing command
command finished in 307ms
failed: "sh -c 'sh -c '\\''cd /fake/path/releases/20140117122310 && php app/console assets:install web --env=prod'\\'''" on example.com
