Go from AWS root to ec2-user - unix

Using unix, I'm able to ssh successfully into my instance using the following:
ssh -i ~/.ssh/seis_v3.pem root@52.5.233.876
However, when I try to log in as ec2-user instead of root, I get denied. Once I'm in as root@52.5.233.876, is there a way to change directory or permissions to get onto ec2-user? Thank you.

ami-728ace1a corresponds to this AMI: suse-sles-11-sp3-sapcal-v20150127-hvm-mag-x86_64. It's an image for SUSE Linux Enterprise Server 11. I just launched an instance using this AMI, and when I tried to ssh in as root:
$ ssh -i id_rsa root@54.158.122.xxx
Please login as the user "ec2-user" rather than the user "root".
So I tried to log in as the ec2-user:
$ ssh -i id_rsa ec2-user@54.158.122.xxx
Last login: Mon Apr 13 01:35:47 2015 from xxx.xxx.xxx.xxx
SUSE Linux Enterprise Server 11 SP3 x86_64 (64-bit)
You mentioned being able to log in successfully as root. Do you get the message that I got above? If not, are you sure you provided the correct AMI ID for this instance? Did you make any changes to the root account and/or any other accounts on this system after launching it? Do you see an ec2-user listed in /etc/passwd?
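If root login works, a quick way to check for the account and hop over to it is below. This is a minimal sketch, and it assumes the ec2-user account exists on the image:

# check whether the account exists
grep '^ec2-user:' /etc/passwd
# switch to it from the root shell
su - ec2-user
# optionally, let ec2-user accept the same key for direct ssh logins
mkdir -p /home/ec2-user/.ssh
cp /root/.ssh/authorized_keys /home/ec2-user/.ssh/authorized_keys
chown -R ec2-user /home/ec2-user/.ssh
chmod 700 /home/ec2-user/.ssh
chmod 600 /home/ec2-user/.ssh/authorized_keys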

Related

Dokku - /home/dokku/.sshcommand: No such file or directory

I've installed Dokku on a VPS running CentOS7. When I 'git push dokku master' I'm getting...
git remote set-url dokku dokku@mydomain.com:trial
git push dokku master
cat: /home/dokku/.sshcommand: No such file or directory
fatal: 'trial' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
When I try and ssh in I also get the same error...
ssh dokku@mydomain.com
cat: /home/dokku/.sshcommand: No such file or directory
Connection to mydomain.com closed.
cat /var/log/secure ...
Nov 7 10:06:29 Callisto sshd[19912]: Accepted publickey for dokku from xxx.xxx.xxx.xxx port 50002 ssh2: RSA SHA256:Y0ueDcZEJWQd9H3FsetReYTDPwJPob6zm9p4Dpt4fOE
Nov 7 10:06:29 Callisto sshd[19912]: pam_unix(sshd:session): session opened for user dokku by (uid=0)
Nov 7 10:06:29 Callisto sshd[19914]: Received disconnect from xxx.xxx.xxx.xxx port 50002:11: disconnected by user
Nov 7 10:06:29 Callisto sshd[19914]: Disconnected from xxx.xxx.xxx.xxx port 50002
Nov 7 10:06:29 Callisto sshd[19912]: pam_unix(sshd:session): session closed for user dokku
Prior to pushing I'm creating the app on the server...
dokku apps:create trial
To add my public ssh key to the server I used dokku ssh-keys:add dokku id_rsa.pub.
Looking at another answer here, it seems that I am in fact missing .sshcommand in /home/dokku/. Any ideas on how to fix this or what could have gone wrong? This has been driving me crazy for the last couple of days.
You must have deleted it at some point. Run the following commands to set everything up again:
echo '/usr/bin/dokku' > /home/dokku/.sshcommand
chmod 0644 /home/dokku/.sshcommand
chown dokku:root /home/dokku/.sshcommand
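To verify the repair, an ssh session as dokku should now be handled by the dokku binary instead of hitting the missing-file error. For example (apps:list is a standard dokku command; the domain is the placeholder from the question):

ssh dokku@mydomain.com apps:list
git push dokku master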

SFTP not working

I have a CentOS server where I have installed the vsftpd service, but I am getting the error
bash: sftp: command not found
Even which sftp can't find the command.
Detailed steps below :
As root:
yum install vsftpd
Total download size: 139 k
Is this ok [y/N]: y
Configure:
vi /etc/vsftpd/vsftpd.conf
Change anonymous_enable=YES to anonymous_enable=NO
Add userlist_deny=NO after userlist_enable
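After those edits, the relevant lines of /etc/vsftpd/vsftpd.conf should read as follows (a sketch; userlist_enable=YES is already present in the stock CentOS file):

anonymous_enable=NO
userlist_enable=YES
userlist_deny=NO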
Add allowed users:
vi /etc/vsftpd/user_list
Replace contents with:
# vsftpd userlist
# userlist_deny=NO so only allow users in this file
user
Turn on the vsftpd service
chkconfig vsftpd on
Start the service
service vsftpd start
Can someone help figure out what I'm doing wrong?
The sftp binary is provided by the openssh-clients package, not by vsftpd. Install it first:
yum install openssh-clients
then you can run sftp.
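Note that sftp transfers files over SSH (it talks to sshd on port 22), so it works independently of vsftpd. A sample session, with user and x.x.x.x standing in for your login and server address:

sftp user@x.x.x.x
sftp> put localfile.txt
sftp> get remotefile.txt
sftp> quit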
Assuming the vsftpd daemon is now running and can get through any firewall you have, you need to use an FTP client to connect to the server.
yum install ftp
ftp x.x.x.x <-- IP address of the server
That will show that it is working. Remotely you will need a client such as FileZilla.

Is the editor Atom able to open projects on a remote server?

Atom is able to open a project and show the whole tree of the project on the left side, which is a really nice feature.
Now that I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) on VirtualBox, is there a way for Atom on the host OS to open a project located on the RHEL guest?
Well yes there is!
You just need to configure sshfs, optionally combined with autofs. Then you can access the files as if they were stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
In addition to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
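For example (a sketch; user, host, and the remote path are placeholders):

$ sudo mkdir -p /media/sshfs
$ sudo chown $USER /media/sshfs
$ sshfs user@host:/var/www /media/sshfs
# unmount it again when you're done
$ fusermount -u /media/sshfs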
I would recommend the remote-sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
using the '-o idmap=user' option will automatically translate the UID/GID of the remote user to the UID/GID of the connecting user on the local host
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost)
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 Apr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 Apr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 Apr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 Apr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details see man sshfs
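As a sketch of the explicit form (the mapping file here is hypothetical; see the idmap=file, uidfile, and nomap options in man sshfs):

# ~/uidmap maps a remote username to a local uid, one "name:uid" per line
echo 'devuser:1000' > ~/uidmap
sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=file -o uidfile=$HOME/uidmap -o nomap=ignore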
I am writing this answer because none of the other answers worked for me.
Mounting the project as a directory and browsing it with Atom (Remco Haszing's answer above) is a brilliant approach,
but in my case Atom wants to index the whole remote project, which is a heavy one, and it becomes unresponsive.
The remote-sync package is good when you work locally and then want to upload the files to the server.
remote-edit is actually the package meant to do this job (editing files remotely over ssh);
the problem is that it has been abandoned.
These helped me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor

salt-ssh permission denied when attempting to log into remote system

I am new to salt-ssh and I have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address. When I try to run salt-ssh commands, I have to fight with the initial login process before eventually it just works. I am looking to narrow down what is causing me to have to fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's ssh key to the root user's authorized_keys on the vagrant vm, and I have verified that I can log into the system using ssh without any issues:
sudo ssh root@192.168.33.10
Here are what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the vagrant vm has the ssh key of the user I am executing salt-ssh as, why am I being told that permission is denied? Especially when I verified I could ssh into the system without using salt-ssh.
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to set, since an ssh key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set the priv in my roster to the rsa key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
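With that line added, the whole roster entry from the question reads (priv is a per-host roster option in salt-ssh):

managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa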

Unable to execute MPICH2 on multiple machines on ubuntu 12.04 (HYDU_sock_connect issue)

I am facing difficulty executing an MPI program on two machines. The OS is Ubuntu 12.04 and the MPI implementation is MPICH2.
ssh is working fine:
root@ubuntu:/home# ssh 192.168.1.9
root@gpuguy's password:
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.8.0-29-generic i686)
* Documentation: https://help.ubuntu.com/
131 packages can be updated.
67 updates are security updates.
Last login: Thu Oct 24 17:36:25 2013 from ubuntu.local
root@gpuguy:~#
But when I run my MPI programs it fails:
root@ubuntu:/home# mpiexec -f hosts.cfg -n 4 hello
root@192.168.1.9's password:
[proxy:0:0@gpuguy] HYDU_sock_connect (./utils/sock/sock.c:171): unable to get host address for ubuntu (1)
[proxy:0:0@gpuguy] main (./pm/pmiserv/pmip.c:209): unable to connect to server ubuntu at port 42104 (check for firewalls!)
I have already disabled the firewall on both machines; that is why I can ssh successfully. But how do I solve this issue?
My MPI code runs successfully on a single machine.
For MPICH (or any MPI implementation) to work, you need to have passwordless SSH set up. I should also mention that you really shouldn't have to be logged in as root to make this work. It's generally a very bad idea to be logged in as root all of the time.
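A minimal passwordless setup, assuming a non-root account (here mpiuser, a placeholder) exists on both machines:

ssh-keygen -t rsa                  # accept the defaults; leave the passphrase empty
ssh-copy-id mpiuser@192.168.1.9    # install the public key on the other machine
ssh mpiuser@192.168.1.9            # should now log in without a password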
In the /etc/hosts file, add the IP address of each server and its hostname.
You should do this on all of the servers,
for example:
10.10.0.5 server1
10.10.0.6 server2
10.10.0.7 server3
Also check that in the /etc/hosts file you use a space, not a tab (\t), to separate the IP address and the hostname.
This is wrong:
10.10.0.5 \t server1
This is right:
10.10.0.5 server1
Be careful not to delete or modify existing lines in the /etc/hosts file; only add new lines at the end of the file.
Also, you do not need to disable the firewall to fix this issue.
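Since the error is "unable to get host address for ubuntu", the Hydra launcher is resolving the peers by hostname, so the names in the MPICH host file should match the /etc/hosts entries. A sketch of hosts.cfg under that assumption (hostnames and process counts are placeholders):

ubuntu:2
gpuguy:2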
