ssh into AWS Batch jobs

I would like to communicate with AWS Batch jobs from a local R process in the same way that Davis Vaughan demonstrated for EC2 at https://gist.github.com/DavisVaughan/865d95cf0101c24df27b37f4047dd2e5. The AWS Batch documentation describes how to set up a key pair and security group for Batch jobs, but I could not find detailed instructions on how to find the IP address of a job's instance or what user name I need. The IP address in particular is not shown in the console when I run the job, and aws batch describe-jobs --jobs prints an empty "jobs": [] JSON string. Where do I find the information I need to ssh into a job's instance? (In my use case, I would prefer the IP address over the host name.)

Posting here in case this helps someone. The instance should show up under "Running instances" in your EC2 console, and you should be able to use the public IP address listed there. Make sure you configured your Batch job to use your EC2 key pair, and use the correct user name (ec2-user for Amazon Linux 2), e.g. ssh -i "your_keypair.pem" ec2-user@XX.XXX.XX.XXX.
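If you prefer to stay on the command line instead of the console, the lookup can be chained with the AWS CLI. This is a sketch, not a definitive recipe: it assumes an EC2-backed (not Fargate) compute environment, an AWS CLI configured for the right region, and placeholder job and key-pair names; the cluster-name extraction also assumes the newer ECS ARN format that embeds the cluster name.

```shell
# Sketch: look up the EC2 instance behind a running AWS Batch job.
# JOB_ID and the key pair name are placeholders.
JOB_ID=example-job-id

# 1. Get the job's ECS container instance ARN. An empty "jobs": []
#    usually means the CLI is pointed at the wrong region, or the job
#    has already finished.
CI_ARN=$(aws batch describe-jobs --jobs "$JOB_ID" \
  --query 'jobs[0].container.containerInstanceArn' --output text)

# 2. Resolve the container instance to an EC2 instance ID. In newer
#    ARNs the cluster name is the second slash-separated field.
CLUSTER=$(echo "$CI_ARN" | cut -d/ -f2)
INSTANCE_ID=$(aws ecs describe-container-instances \
  --cluster "$CLUSTER" --container-instances "$CI_ARN" \
  --query 'containerInstances[0].ec2InstanceId' --output text)

# 3. Get the instance's public IP and ssh in.
IP=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
ssh -i your_keypair.pem "ec2-user@$IP"
```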

Related

I need to connect to an instance created on a VMware-based Eucalyptus cloud platform, but I don't understand how to do it.

Connect to instance: i-38942195
To connect to your instance, be sure security group my-test-security-group has TCP port 22 open to inbound traffic and then perform the following steps (these instructions do not apply if you did not select a key pair when you launched this instance):
Open an SSH terminal window.
Change your directory to the one where you stored your key file my-test-keypair.pem
Run the following command to set the correct permissions for your key file:
chmod 400 my-test-keypair.pem
Connect to your instance via its public IP address by running the following command:
ssh -i my-test-keypair.pem root@192.168.0.29
Eucalyptus no longer supports VMware, but to generally troubleshoot instance connectivity you would first check that you are using a known good image such as those available via:
# python <(curl -Ls https://eucalyptus.cloud/images)
and ensure that the instance booted correctly:
# euca-get-console-output i-38942195
if that looks good (check for instance meta-data access for the SSH key) then check that the security group rules are correct, and that the instance is running using the expected security group and SSH key.
VMWare deprecation notice from version 4.1:
Support for VMWare features in Eucalyptus has been deprecated and will be removed in a future release.
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#release-notes/4.1.0/4.1.0_rn_features.html
Euca2ools command:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#euca2ools-guide/euca-get-console-output.html

How to do SSH key exchange between VMs via HEAT template (Not only between Control node and VMs)

I am a newbie to OpenStack. I am creating a stack using a HEAT template. In the YAML file I set the key name as
parameters: # Common parameters
  key_name: my-key-pair
After the stack is created, I can successfully ssh to all VMs from my control node without a password, like this:
ssh -i /root/my-key-pair.pem user@instanceip
My requirement is to do ssh between the VMs in the same way. Just as between the control node and the VMs, I want to ssh without a password from VM1 to VM2.
If I copy the pem file to VM1, then I can ssh without a password from VM1 to the other VMs like
ssh -i /VM1-home/my-key-pair.pem user@otherinstanceip
But, is there a way that this can be accomplished during stack creation itself? So that, immediately after stack creation via heat template, I can ssh from any instance to other instances?
Can someone help, please?
Thank You,
Subeesh
You can do this without HEAT.
You should be able to make use of ssh agent forwarding.
steps:
Start the ssh-agent in the background.
eval "$(ssh-agent -s)"
Add your key to the agent
ssh-add /root/my-key-pair.pem
Then ssh into the first host with agent forwarding enabled (ssh -A user@instanceip); you should be able to jump between servers from there.
The way to do it with HEAT would be to place the pem file in the correct location on the created instances; this should be possible with the personality property:
personality: {"/root/my-key-pair.pem": {get_file: "path/to/local/pem/file"}}
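In template form, the personality entry sits on the server resource. The sketch below is an assumption-heavy illustration: the resource name, image, flavor, and file path are all placeholders, and note that a file delivered via personality will still need its permissions tightened (e.g. chmod 400 in user_data) before ssh will accept it.

```yaml
# Sketch of a HEAT server resource that drops the key pair onto the
# instance at boot; image, flavor, and paths are placeholders.
resources:
  vm1:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: my-image
      flavor: m1.small
      personality:
        /root/my-key-pair.pem: { get_file: my-key-pair.pem }
      user_data: |
        #!/bin/sh
        # ssh refuses keys with loose permissions.
        chmod 400 /root/my-key-pair.pem
```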

How to execute a command on hosts (physical machines) via OpenStack code?

I am trying to modify/add some OpenStack code to implement the following functionality: after a user clicks a button, a command is executed on a specified host (e.g. one compute node).
One user scenario is enabling the KSM kernel feature on a specified host. All that is needed is to run "echo 1 > /sys/kernel/mm/ksm/run". I can get the IP of the host (some compute node), but how do I execute the above command via OpenStack code?
(I checked all the Nova APIs. It seems there is no such Nova API to execute a command on a host. Also, I checked all the Ironic APIs. The same result.)

How do I refer to my local computer for scp'ing when logged into remote?

This must be a really simple question, but I am trying to move a file from a remote server to my local computer, while logged into the remote (via ssh).
All of the guides say to just use
scp name#remote:/path/to/file local/path/to/file
But as far as I can understand, that would be what I would use from my local machine. From the remote machine, I assume that I want to use something like
scp /path/to/file my_local_computer:/local/path/to/file
but (if that's even correct) how do I know what to put in for my_local_computer?
Thanks!
You can automatically figure out where you're logged in from by checking the environment variables SSH_CONNECTION and/or SSH_CLIENT. SSH_CONNECTION, for example, shows the client address, the outgoing port on the client, the server address, and the incoming port on the server. See the ENVIRONMENT section in man ssh.
So, if you want to copy a file from the server to the client you're logged in from, the following (which infers the client IP by taking the first field of SSH_CONNECTION) should work:
scp /path/to/file $(echo $SSH_CONNECTION | cut -f 1 -d ' '):/local/path/to/file
Andreas
You are on the right track! The man page for scp should tell you how to do what you want: http://linux.die.net/man/1/scp
If you are having trouble understanding the man page, then I will attempt to instruct you:
If you want to push a file from your local machine to a remote machine
scp /path/to/local/file testuser@remote-host:/path/to/where/you/want/to/put/file
If you want to pull a file from a remote machine to your local machine
scp testuser@remote-host:/path/to/file/you/want/to/pull /path/on/local/machine/to/place/file
If you are logged into a remote machine and want to push a file to your local machine (assuming you have the ability to scp to the local machine in the first place)
scp /path/on/remote/machine/to/file testuser@local-host:/path/on/local/machine/to/put/file
Now, to determine what your local-host address is, you can check the IP address of your local machine or if your local machine has been provided a DNS entry, you could use it.
I.e., scp ~/myfile testuser@192.168.1.10:/home/testuser/myfile or scp ~/myfile testuser@my-host:/home/testuser/myfile
For the DNS entry, provided you are on a correctly configured network, you would not need a fully qualified domain. Otherwise, you would need to do something like testuser@my-host.example.com:/home/testuser/myfile
Maybe you can build a solution around this:
who | grep $USER
When run on the remote computer, it should give a hint about where you connected from.
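Building on that hint: the last field that who prints for an ssh session is usually the client address in parentheses, so a sketch (assuming an interactive session on a tty, where who am i produces output; in non-tty sessions SSH_CONNECTION is the more reliable source) could be:

```shell
# Sketch: extract the address you connected from out of `who` output.
# `who am i` prints something like:
#   user  pts/0  2024-01-01 10:00 (192.168.1.5)
# The last field, minus the parentheses, is the client address.
client=$(who am i | awk '{print $NF}' | tr -d '()')
echo "$client"
```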

Amazon AWS question: What is the Unix command to run on an EC2 instance that'll tell you what region the server is in?

Is there a Unix command to run on an Amazon EC2 instance which will return what region and availability zone the current instance is in?
There is a tool, ec2-metadata (shipped with Amazon Linux), that reports instance metadata: region, availability zone, public hostname, etc.
There is also an HTTP API, the instance metadata service, that you can use to retrieve the same information.
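The instance metadata service is reachable only from inside the instance, at the link-local address 169.254.169.254. A sketch of querying it from the shell (the region is the availability-zone string minus its trailing letter, e.g. us-east-1a is in us-east-1):

```shell
# Sketch: query the EC2 instance metadata service. Only works from
# inside an EC2 instance; 169.254.169.254 is link-local.
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "availability zone: $AZ"

# The region is the AZ minus its trailing letter, e.g. us-east-1a -> us-east-1.
REGION=${AZ%?}
echo "region: $REGION"
```

On Amazon Linux, ec2-metadata -z prints the same availability-zone information without the curl.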
