I've been trying, unsuccessfully, to connect to an EC2 instance on AWS through FileZilla. Here is a screenshot of the settings I have in the FileZilla Site Manager.
Here is the error log from FileZilla with the debug level set to 4:
Trace: CControlSocket::DoClose(64)
Trace: CControlSocket::DoClose(64)
Trace: CControlSocket::DoClose(64)
Trace: CFileZillaEnginePrivate::ResetOperation(0)
Status: Connecting to ec2-54-251-155-167.ap-southeast-1.compute.amazonaws.com...
Trace: Going to execute /Users/zaidhumayun/Documents/FileZilla.app/Contents/MacOS/fzsftp
Response: fzSftp started, protocol_version=6
Trace: CSftpControlSocket::ConnectParseResponse(fzSftp started, protocol_version=6)
Trace: CSftpControlSocket::SendNextCommand()
Trace: CSftpControlSocket::ConnectSend()
Command: keyfile "/Users/zaidhumayun/Desktop/FirstInstance.pem"
Trace: CSftpControlSocket::ConnectParseResponse()
Trace: CSftpControlSocket::SendNextCommand()
Trace: CSftpControlSocket::ConnectSend()
Command: open "ubuntu#ec2-54-251-155-167.ap-southeast-1.compute.amazonaws.com" 22
Trace: Server version: SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
Trace: We believe remote version has SSH-2 channel request bug
Trace: Using SSH protocol version 2
Trace: Doing ECDH key exchange with curve Curve25519 and hash SHA-256
Trace: Server also has ecdsa-sha2-nistp256/ssh-dss/ssh-rsa host keys, but we don't know any of them
Trace: Host key fingerprint is:
Trace: ssh-ed25519 256 7e:1c:94:69:78:c9:13:a3:38:0d:07:54:e9:28:5b:3b
Command: Trust new Hostkey: Once
Trace: Initialised AES-256 GCM client->server encryption
Trace: Initialised AES256 GCM client->server MAC algorithm (in ETM mode) (required by cipher)
Trace: Initialised AES-256 GCM server->client encryption
Trace: Initialised AES256 GCM server->client MAC algorithm (in ETM mode) (required by cipher)
Trace: Pageant is running. Requesting keys.
Trace: Pageant has 0 SSH-2 keys
Trace: Successfully loaded 1 key pair from file
Trace: Offered public key from "/Users/zaidhumayun/Desktop/FirstInstance.pem"
Trace: Server refused our key
Trace: Disconnected: No supported authentication methods available (server sent: publickey)
I have no idea why it's not accepting my key pair. What do I need to do to fix this? Is there some way to re-download the key pair? Because according to the Amazon documentation, you're only allowed to download it once, at instance launch.
According to this page, the username should be 'bitnami', not 'ubuntu'.
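A quick way to verify the username outside FileZilla is to try the same key over plain ssh (a hedged sketch reusing the key path and hostname from the log above; swap the username back to 'ubuntu' if the AMI turns out to be a stock Ubuntu image):
# Success drops you into a shell; "Permission denied (publickey)" means the
# username/key combination is still wrong.
ssh -i ~/Desktop/FirstInstance.pem bitnami@ec2-54-251-155-167.ap-southeast-1.compute.amazonaws.com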
We have two dotnet applications that publish to and subscribe from RabbitMQ, respectively. Our developers tested them locally and they work fine on their workstations (launched through Visual Studio). However, when we build them into containers and run them, the RabbitMQ logs show the following error messages:
2022-11-01 09:29:15.998692+00:00 [info] <0.1560.868> accepting AMQP connection <0.1560.868>
2022-11-01 09:31:22.599983+00:00 [error] <0.2112.868> closing AMQP connection <0.2112.868>
2022-11-01 09:31:22.599983+00:00 [error] <0.2112.868> {handshake_timeout,frame_header}
And below is the error message from our dotnet container:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
---> RabbitMQ.Client.Exceptions.PossibleAuthenticationFailureException: Possibly caused by authentication failure
---> RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Library, code=541, text='Unexpected Exception', classId=0, methodId=0, cause=System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
---> System.Net.Sockets.SocketException (104): Connection reset by peer
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BufferedStream.ReadByteSlow()
at System.IO.BufferedStream.ReadByte()
at RabbitMQ.Client.Impl.InboundFrame.ReadFrom(Stream reader, Byte[] frameHeaderBuffer, ArrayPool`1 pool, UInt32 maxMessageSize)
at RabbitMQ.Client.Impl.SocketFrameHandler.ReadFrame()
at RabbitMQ.Client.Framing.Impl.Connection.MainLoopIteration()
at RabbitMQ.Client.Framing.Impl.Connection.MainLoop()
at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply(TimeSpan timeout)
at RabbitMQ.Client.Impl.ModelBase.ConnectionStartOk(IDictionary`2 clientProperties, String mechanism, Byte[] response, String locale)
at RabbitMQ.Client.Framing.Impl.Connection.StartAndTune()
--- End of inner exception stack trace ---
at RabbitMQ.Client.Framing.Impl.Connection.StartAndTune()
at RabbitMQ.Client.Framing.Impl.Connection.Open(Boolean insist)
at RabbitMQ.Client.Framing.Impl.Connection..ctor(IConnectionFactory factory, Boolean insist, IFrameHandler frameHandler, String clientProvidedName)
at RabbitMQ.Client.Framing.Impl.Connection..ctor(IConnectionFactory factory, Boolean insist, IFrameHandler frameHandler, ArrayPool`1 memoryPool, String clientProvidedName)
at RabbitMQ.Client.Framing.Impl.AutorecoveringConnection.Init(IFrameHandler fh)
at RabbitMQ.Client.Framing.Impl.AutorecoveringConnection.Init(IEndpointResolver endpoints)
at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
--- End of inner exception stack trace ---
I have spent two days googling and troubleshooting, but I am not a developer myself and couldn't find anything similar to our issue. Any help and guidance would be appreciated.
RabbitMQ version: 3.10.5
Erlang version: 24.3.4
.NET version: 6.0.10
RabbitMQ Client Library version: 6.2.1
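For what it's worth, {handshake_timeout,frame_header} on the broker side means the TCP connection was accepted but no AMQP protocol header arrived in time, which often points at a port or TLS mismatch (for example, dialing the TLS port 5671 without TLS enabled, or the management port 15672 with an AMQP client). A hedged first check from inside the app container (the container name 'my-app' and broker hostname 'rabbitmq' are assumptions for your setup):
# Confirm the hostname the app resolves answers on the plain AMQP port 5672.
docker exec -it my-app sh -c 'nc -vz rabbitmq 5672'
If nc is not present in the image, checking the port configured in the ConnectionFactory against the broker's actual listeners is the same idea.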
I face the following issues when trying to connect from my PC, using either PowerShell or Cygwin, to the AWS instance on which my WordPress site is hosted (Bitnami).
(I simply want to log in to the server either this way or using PuTTY, as described here (LINK). PuTTY is throwing the error "Using username bitnami. Server refused our key. No supported authentication methods available (server sent: publickey)".)
What I tried so far:
I execute either or both of the following commands...
chmod 600 <key-pair-from-aws>.pem
chmod 400 <key-pair-from-aws>.pem
(When I logged in to the EC2 console, I saw an entry under the Key Pairs section, but I could not download it. That's why I generated a new key pair, and that is the file I am using in the commands below.)
Then I enter the following command...
ssh bitnami@<public-ip-address> -i <key-pair-from-aws>.pem
... I get the following error:
Permissions for '(key-pair-from-aws).pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key ".pem": bad permissions
bitnami@(public-ip-address): Permission denied (publickey).
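As an aside, chmod run from Cygwin does not reliably translate into the Windows ACLs that OpenSSH for Windows actually checks; the command-line equivalent of the Properties -> Security steps described next is icacls. A hedged sketch for PowerShell, with the filename as a placeholder:
# Strip inherited permissions, then grant read access to the current user only.
icacls .\key-pair-from-aws.pem /inheritance:r
icacls .\key-pair-from-aws.pem /grant:r "${env:USERNAME}:R"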
Now, if I select the file on the PC (Properties -> Security -> Advanced -> Disable inheritance), then remove every user except my own, and then execute the same command ...
ssh bitnami@<public-ip-address> -i <key-pair-from-aws>.pem
... I get the following error:
bitnami@<public-ip-address>: Permission denied (publickey).
Here I am stuck, because I do not have any idea how to proceed further.
Searching on Stack Overflow and Google, I could not find anything to help me solve this issue.
Can anyone please help with concrete, step-by-step instructions?
Thank you!
Update: here is the result of the command:
$ ssh -v -i "pem-file-name.pem" bitnami@<public-ip-address>
OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2
debug1: Connecting to <public-ip-address> [<public-ip-address>] port 22.
debug1: Connection established.
debug1: identity file kljuc_par_ime.pem type -1
debug1: identity file kljuc_par_ime.pem-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_for_Windows_8.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_8.4p1 Debian-5+deb11u1
debug1: match: OpenSSH_8.4p1 Debian-5+deb11u1 pat OpenSSH* compat 0x04000000
debug1: Authenticating to <public-ip-address>:22 as 'bitnami'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: <deleted for sec purposes> SHA256:<deleted for security purposes>
debug1: Host '<public-ip-address>' is known and matches the ECDSA host key.
debug1: Found key in C:\\Users\\My-User/.ssh/known_hosts:1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
debug1: pubkey_prepare: ssh_get_authentication_socket: No such file or directory
debug1: Will attempt key: kljuc_par_ime.pem explicit
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,sk-ssh-ed25519@openssh.com,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: kljuc_par_ime.pem
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
bitnami@<public-ip-address>: Permission denied (publickey)
To use a keypair with an Amazon EC2 instance, you should specify the keypair when launching the instance. It is not possible to SSH into an instance using a keypair generated after the instance is launched.
Also, Bitnami AMIs generate a random password for WordPress, which can be extracted from the System Log after the instance boots.
See: Bitnami: Find application credentials
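If the goal is just to recover that WordPress password, a hedged sketch with the AWS CLI (the instance ID is a placeholder):
# The Bitnami banner in the console output prints the generated password.
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text | grep -i password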
The scenario is the same as in "Hazelcast tcp-ip configuration cluster: Unwanted IPs join the cluster even after cluster-name is specified".
When I finished starting two nodes in cluster1, everything ran OK. However, when I ran one node from cluster2, I saw the following error after a while. In the log, I have masked the IPs as "machineC_IP" and "transformed_IP". Note that I have enabled REST as follows:
hazelcast:
  rest:
    enabled: true
Log:
com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz.thirsty_brahmagupta.IO.thread-in-1] [machineC_IP]:5702 [dev] [4.0.1] Connection[id=5, /machineC_IP:5702->/transformed_IP:51320, qualifier=null, endpoint=null, alive=false, connectionType=NONE] closed. Reason: Exception in Connection[id=5, /machineC_IP:5702->/transformed_IP:51320, qualifier=null, endpoint=null, alive=true, connectionType=NONE], thread=hz.thirsty_brahmagupta.IO.thread-in-1
java.lang.IllegalStateException: REST API is not enabled.
at com.hazelcast.internal.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:105)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:382)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:367)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:293)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:248)
com.hazelcast.logging.StandardLoggerFactory$StandardLogger [hz.suspicious_brahmagupta.IO.thread-in-2] [machineC_IP]:5701 [cluster2] [4.0.1] Connection[id=18, /machineC_IP:5701->/transformed_IP:46468, qualifier=null, endpoint=null, alive=false, connectionType=NONE] closed. Reason: Exception in Connection[id=18, /machineC_IP:5701->/transformed_IP:46468, qualifier=null, endpoint=null, alive=true, connectionType=NONE], thread=hz.suspicious_brahmagupta.IO.thread-in-2
java.lang.IllegalStateException: Unknown protocol: I^@^@
at com.hazelcast.internal.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:116)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:382)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:367)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:293)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:248)
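Given the "REST API is not enabled" exception above, it is worth confirming the REST endpoint ever came up: if I remember the 4.x schema correctly, the REST API is configured under network.rest-api rather than a top-level rest key, which would explain the configuration shown above being silently ignored. A hedged probe, reusing the masked IP from the log:
# Should return cluster information if the REST endpoint group is active;
# a connection error or 404 suggests the config was not picked up.
curl http://machineC_IP:5701/hazelcast/rest/cluster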
I am trying to deploy Maximo Anywhere apps from a macOS system using the ./build.sh command. It always fails with:
Resexception: unexpected response, Not able to access <an URL>
I am not able to understand what is causing this issue.
The MobileFirst server is deployed on a cloud server.
Please let me know if you have faced this issue.
Below is the error log I can see:
[8/7/18 16:32:45:610 AEST] 000000bc WASRuntimeMBe I SOAP connection with port number 8879
[8/7/18 16:32:45:610 AEST] 000000bc WASRuntimeMBe I Establishing SOAP connection on *actualserverName* with port number 8879
[8/7/18 16:32:57:985 AEST] 00000096 WorklightILMT I com.worklight.core.ilmt.WorklightILMTLogger dumpLicense FWLSE0277I: Creating an ILMT record in the file 'D:\IBM\WebSphere\AppServer\profiles\ctgAppSrv01\logs\worklight\caffeda003ba82a720f5f584f060728a.slmtag'. [project MaximoAnywhere]
[8/7/18 16:32:59:126 AEST] 0000005a ProjectManage I com.worklight.core.jmx.ProjectManagementMXBeanImpl logTransactionDetails FWLSE0275I: Starting transaction with ID 14 for 'deployAdapter'. [project MaximoAnywhere]
[8/7/18 16:33:00:891 AEST] 0000005a StatusMessage E StatusMessage createStatusMessage Preparation to deploy adapter failed: HTTP/1.1 404 Not Found while accessing MobileFirst artifact URL: https://*aliashostName*:443/wladmin/otu/1.0/28656effffffd52effffff871bffffffc759ffffffc4ffffffb3ffffffee70377b3f/runtimes/MaximoAnywhere/downloads/adapters/Temporary317569284
java.io.IOException: HTTP/1.1 404 Not Found while accessing MobileFirst artifact URL: https://*aliashostName*:443/wladmin/otu/1.0/28656effffffd52effffff871bffffffc759ffffffc4ffffffb3ffffffee70377b3f/runtimes/MaximoAnywhere/downloads/adapters/Temporary317569284
at com.worklight.common.util.HttpUtil.getBytesFromURL(HttpUtil.java:630)
at com.worklight.integration.services.impl.AdapterManagementServiceBean.readAdapterContent(AdapterManagementServiceBean.java:181)
at com.worklight.integration.services.impl.AdapterManagementServiceBean.deployAdapter(AdapterManagementServiceBean.java:112)
at com.worklight.mgmt.impl.AdapterManagementImpl.deployAdapter(AdapterManagementImpl.java:52)
at com.worklight.core.jmx.ProjectManagementMXBeanImpl.deployAdapter(ProjectManagementMXBeanImpl.java:1488)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
at java.lang.reflect.Method.invoke(Method.java:620)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:88)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
at java.lang.reflect.Method.invoke(Method.java:620)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:292)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:206)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:188)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:130)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:67)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:250)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:151)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:265)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:832)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:814)
at com.ibm.ws.management.AdminServiceImpl$1.run(AdminServiceImpl.java:1350)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.management.AdminServiceImpl.invoke(AdminServiceImpl.java:1243)
at com.ibm.ws.management.connector.AdminServiceDelegator.invoke(AdminServiceDelegator.java:181)
at com.ibm.ws.management.connector.ipc.CallRouter.route(CallRouter.java:247)
at com.ibm.ws.management.connector.ipc.IPCConnectorInboundLink.doWork(IPCConnectorInboundLink.java:360)
at com.ibm.ws.management.connector.ipc.IPCConnectorInboundLink$IPCConnectorReadCallback.complete(IPCConnectorInboundLink.java:602)
at com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$QueuedWork.run(SSLReadServiceContext.java:1987)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1892)
Caused by: java.io.IOException: HTTP/1.1 404 Not Found
at com.worklight.common.util.HttpUtil.getBytesFromURL(HttpUtil.java:627)
... 32 more
[8/7/18 16:33:00:970 AEST] 000000af BaseTransacti E Result: MaximoAnywhere: worklight/ctgCell01/ctgNode01/172.23.100.7: HTTP/1.1 404 Not Found while accessing MobileFirst artifact URL: https://*aliashostName*:443/wladmin/otu/1.0/28656effffffd52effffff871bffffffc759ffffffc4ffffffb3ffffffee70377b3f/runtimes/MaximoAnywhere/downloads/adapters/Temporary317569284
[8/7/18 16:33:01:001 AEST] 0000005a ProjectManage I com.worklight.core.jmx.ProjectManagementMXBeanImpl logTransactionDetails FWLSE0275I: Starting transaction with ID 14 for 'reject'. [project MaximoAnywhere]
[8/7/18 16:33:01:079 AEST] 000000af BaseTransacti I Result: MaximoAnywhere: worklight/ctgCell01/ctgNode01/172.23.100.7: Rollback
I've read somewhere that you're not supposed to deploy the apps to the real server from a local environment. Is there a team administering the server in the cloud? If so, I would recommend providing the cloud team with your (perhaps updated) application folders and asking them to perform the deploy for you, from the server itself.
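If you want to narrow it down first, one hedged check is whether the alias hostname routes to the MobileFirst admin service at all, since the 404 is on a one-time URL that the admin service itself hands out (the host is the placeholder from the log; the one-time token is deliberately not reproduced):
# Any response from the admin context root (even 401/302) suggests routing
# works; a 404 here as well points at the alias/virtual-host mapping.
curl -k -I https://aliashostName:443/wladmin/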
I am attempting to bootstrap a private OpenStack cloud using Cloudify 2.7.1. It boots the Linux instance correctly but fails at "Uploading files to 192.168.10.XXX." due to an SFTP problem: "Could not determine the type of file "sftp://root:***@192.168.10.xxx/root/gs-files".".
I can access the instance using ssh (there is no problem with the connection). I tried other images (CentOS, Ubuntu, CirrOS, ...) but always get the same error!
Can anyone help me, please?
I attached a screenshot of the network topology created by Cloudify, and the stack trace.
Full stack trace:
2015-04-30 10:26:27,470 INFO [org.cloudifysource.shell.commands.AbstractGSCommand] - Setting security profile to "nonsecure".
2015-04-30 10:26:27,589 INFO [org.cloudifysource.shell.commands.AbstractGSCommand] - Bootstrapping cloud openstack-havana. This may take a few minutes.
2015-04-30 10:26:27,677 INFO [org.cloudifysource.esc.driver.provisioning.BaseProvisioningDriver] - Setup network configuration for managers
2015-04-30 10:26:27,677 INFO [org.cloudifysource.esc.driver.provisioning.BaseProvisioningDriver] - Using management network : Cloudify-Management-Network
2015-04-30 10:26:51,536 INFO [org.cloudifysource.esc.shell.listener.CliAgentlessInstallerListener] - Attempting to access Management VM 192.168.10.241.
2015-04-30 10:27:10,551 INFO [org.cloudifysource.esc.shell.listener.CliAgentlessInstallerListener] - Uploading files to 192.168.10.241.
2015-04-30 10:27:15,708 WARNING [com.jcraft.jsch] - Permanently added '192.168.10.241' (RSA) to the list of known hosts.
2015-04-30 10:27:25,998 INFO [org.cloudifysource.esc.shell.installer.CloudGridAgentBootstrapper] - Failed accessing management VM 192.168.10.241 Reason: Failed to set up file transfer: Unknown message with code "Could not determine the type of file "sftp://cirros@192.168.10.241/cirros/gs-files".".; Caused by: org.cloudifysource.esc.installer.InstallerException: Failed to set up file transfer: Unknown message with code "Could not determine the type of file "sftp://cirros@192.168.10.241/cirros/gs-files".".
2015-04-30 10:27:26,210 INFO [org.cloudifysource.esc.driver.provisioning.openstack.OpenStackCloudifyDriver] - Deleting Floating ip: FloatingIp[floatingNetworkId=15578898-5e6b-44d9-a73a-1328ca6ea140,floatingIpAddress=192.168.10.241,portId=4b8dc211-12e8-4383-8799-f783d2786e98,id=593d8424-cfec-41ed-8204-ed8609366416]
2015-04-30 10:27:29,607 SEVERE [org.cloudifysource.shell.commands.AbstractGSCommand] - Failed to set up file transfer: Unknown message with code "Could not determine the type of file "sftp://cirros@192.168.10.241/cirros/gs-files".". : org.cloudifysource.esc.installer.InstallerException: Failed to set up file transfer: Unknown message with code "Could not determine the type of file "sftp://cirros@192.168.10.241/cirros/gs-files".".
at org.cloudifysource.esc.installer.filetransfer.VfsFileTransfer.initialize(VfsFileTransfer.java:206)
at org.cloudifysource.esc.installer.AgentlessInstaller.uploadFilesToServer(AgentlessInstaller.java:306)
at org.cloudifysource.esc.installer.AgentlessInstaller.installOnMachineWithIP(AgentlessInstaller.java:210)
at org.cloudifysource.esc.shell.installer.CloudGridAgentBootstrapper$1.call(CloudGridAgentBootstrapper.java:865)
at org.cloudifysource.esc.shell.installer.CloudGridAgentBootstrapper$1.call(CloudGridAgentBootstrapper.java:860)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.commons.vfs2.FileSystemException: Unknown message with code "Could not determine the type of file "sftp://cirros@192.168.10.241/cirros/gs-files".".
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.refresh(SftpFileObject.java:95)
at org.apache.commons.vfs2.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:366)
at org.apache.commons.vfs2.provider.AbstractFileSystem.resolveFile(AbstractFileSystem.java:317)
at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:85)
at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:65)
at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:693)
at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:621)
at org.cloudifysource.esc.installer.filetransfer.VfsFileTransfer.resolveTargetDirectory(VfsFileTransfer.java:218)
at org.cloudifysource.esc.installer.filetransfer.VfsFileTransfer.initialize(VfsFileTransfer.java:203)
... 8 more
Caused by: org.apache.commons.vfs2.FileSystemException: Could not determine the type of file "sftp://cirros@192.168.10.241/cirros/gs-files".
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:505)
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.refresh(SftpFileObject.java:91)
... 16 more
Caused by: org.apache.commons.vfs2.FileSystemException: Could not connect to SFTP server at "sftp://cirros@192.168.10.241/".
at org.apache.commons.vfs2.provider.sftp.SftpFileSystem.getChannel(SftpFileSystem.java:153)
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.statSelf(SftpFileObject.java:151)
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.doGetType(SftpFileObject.java:114)
at org.apache.commons.vfs2.provider.AbstractFileObject.getType(AbstractFileObject.java:496)
... 17 more
Caused by: com.jcraft.jsch.JSchException: java.io.IOException: Pipe closed
at com.jcraft.jsch.ChannelSftp.start(ChannelSftp.java:288)
at com.jcraft.jsch.Channel.connect(Channel.java:152)
at com.jcraft.jsch.Channel.connect(Channel.java:145)
at org.apache.commons.vfs2.provider.sftp.SftpFileSystem.getChannel(SftpFileSystem.java:130)
... 20 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308)
at java.io.PipedInputStream.read(PipedInputStream.java:378)
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2665)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2691)
at com.jcraft.jsch.ChannelSftp.start(ChannelSftp.java:257)
... 23 more
Looks like you are trying to SFTP into a CirrOS instance - I am not sure CirrOS even supports SFTP. You can test this by using the sftp command-line utility.
In general, sftp has to be configured and available on the target machine.
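For example, a quick hedged check using the address from the stack trace (a password or key prompt followed by an sftp> prompt means the SFTP subsystem is available):
sftp cirros@192.168.10.241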
You can try using the SCP file transfer mode by setting this in your compute template:
fileTransfer org.cloudifysource.domain.cloud.FileTransferModes.SCP
If you are really using CirrOS, I suspect bootstrapping will fail. Cloudify was never tested on CirrOS, and I think CirrOS lacks some very basic utilities (I don't think it runs bash, and I'm not sure it has wget). CirrOS was never meant as a general-purpose distribution - it is meant for testing your cloud's basic functionality.
One more thing - Cloudify 2 has reached End-of-Life - it is no longer supported. You should check out Cloudify 3.