I'd like to allow a user to set up an SSH tunnel to a particular machine on a particular port (say, 5000), but I want to restrict this user as much as possible. (Authentication will be with public/private keypair).
I know I need to edit the relevant ~/.ssh/authorized_keys file, but I'm not sure exactly what content to put in there (other than the public key).
On Ubuntu 11.10, I found I could block ssh commands, sent with and without -T, and block scp copying, while allowing port forwarding to go through.
Specifically I have a redis-server on "somehost" bound to localhost:6379 that I wish to share securely via ssh tunnels to other hosts that have a keyfile and will ssh in with:
$ ssh -i keyfile.rsa -T -N -L 16379:localhost:6379 someuser@somehost
This makes the redis-server bound to localhost:6379 on "somehost" appear locally on the host running the ssh command, remapped to localhost port 16379.
On the remote "somehost" Here is what I used for authorized_keys:
cat .ssh/authorized_keys (portions redacted)
no-pty,no-X11-forwarding,permitopen="localhost:6379",command="/bin/echo do-not-send-commands" ssh-rsa rsa-public-key-code-goes-here keyuser@keyhost
The no-pty trips up most ssh attempts that want to open a terminal.
The permitopen explains what ports are allowed to be forwarded, in this case port 6379 the redis-server port I wanted to forward.
The command="/bin/echo do-not-send-commands" echoes back "do-not-send-commands" if someone or something does manage to send commands to the host via ssh -T or otherwise.
From a recent Ubuntu man sshd, authorized_keys / command is described as follows:
command="command"
Specifies that the command is executed whenever this key is used
for authentication. The command supplied by the user (if any) is
ignored.
Attempts to use scp for secure file copying also fail, with an echo of "do-not-send-commands". I've found sftp fails with this configuration as well.
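A quick sanity check from the client side along these lines should confirm the restrictions (the key file name, hostnames and the locally installed redis-cli are assumptions carried over from the example above):
# A plain command attempt only gets the canned echo back and the session closes:
ssh -i keyfile.rsa someuser@somehost ls
# prints: do-not-send-commands
# The tunnel itself still comes up fine:
ssh -i keyfile.rsa -T -N -L 16379:localhost:6379 someuser@somehost &
# And the forwarded redis port answers locally:
redis-cli -p 16379 ping
# prints: PONG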
I think the restricted shell suggestion, made in some previous answers, is also a good idea.
Also, I would agree that everything detailed here could be determined from reading "man sshd" and searching therein for "authorized_keys".
You'll probably want to set the user's shell to the restricted shell. Unset the PATH variable in the user's ~/.bashrc or ~/.bash_profile, and they won't be able to execute any commands. Later on, if you decide you want to allow the user(s) to execute a limited set of commands, like less or tail for instance, then you can copy the allowed commands to a separate directory (such as /home/restricted-commands) and update the PATH to point to that directory.
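A rough sketch of that setup, assuming the account is called tunneluser and you later want to allow just less and tail (all names here are placeholders):
# Give the account a restricted shell
sudo chsh -s /bin/rbash tunneluser
# Directory holding the only commands the user may run
sudo mkdir -p /home/restricted-commands
sudo cp /usr/bin/less /usr/bin/tail /home/restricted-commands/
# In the user's ~/.bash_profile: either leave PATH unset/empty to forbid
# commands entirely, or point it only at the restricted directory
PATH=/home/restricted-commands
export PATH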
Besides authorized_keys options like no-X11-forwarding, there is actually exactly one that does what you are asking for: permitopen="host:port". By using this option, the user may only set up a tunnel to the specified host and port.
For the details of the AUTHORIZED_KEYS file format refer to man sshd.
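For example, a minimal authorized_keys entry using that option might look like this (the host, port and truncated key are placeholders):
permitopen="localhost:6379",no-pty,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA...rest-of-public-key user@client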
My solution is to give the user, who may only tunnel and gets no interactive shell, that restriction by setting their shell in /etc/passwd to /usr/bin/tunnel_shell.
Just create the executable file /usr/bin/tunnel_shell with an infinite loop.
#!/bin/bash
# Ignore signals 2, 20 and 24 (INT, TSTP, XCPU on Linux) so the user cannot break out of the loop
trap '' 2 20 24
clear
echo -e "\r\n\033[32mSSH tunnel started, shell disabled by the system administrator\r\n"
# Sleep forever; the session only ends when the ssh connection is closed
while true ; do
  sleep 1000
done
exit 0
Fully explained here: http://blog.flowl.info/2011/ssh-tunnel-group-only-and-no-shell-please/
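Wiring it up is then something like this (a sketch; the account name tunneluser is a placeholder):
sudo install -m 755 tunnel_shell /usr/bin/tunnel_shell
# register it as a valid login shell and assign it to the tunnel-only account
echo /usr/bin/tunnel_shell | sudo tee -a /etc/shells
sudo chsh -s /usr/bin/tunnel_shell tunneluser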
Here you have a nice post that I found useful:
http://www.ab-weblog.com/en/creating-a-restricted-ssh-user-for-ssh-tunneling-only/
The idea is as follows (with "sshtunnel" as the new restricted username):
useradd sshtunnel -m -d /home/sshtunnel -s /bin/rbash
passwd sshtunnel
Note that we use rbash (restricted-bash) to restrict what the user can do: the user cannot cd (change directory) and cannot set any environment variables.
Then we edit the user's PATH env variable in /home/sshtunnel/.profile to nothing - a trick that will make bash not find any commands to execute:
PATH=""
Finally we disallow the user to edit any files by setting the following permissions:
chmod 555 /home/sshtunnel/
cd /home/sshtunnel/
chmod 444 .bash_logout .bashrc .profile
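From the client side, such a user would then open a tunnel with something like this (the ports and hostname are only examples):
ssh -N -L 16379:localhost:6379 sshtunnel@somehost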
I'm able to set up the authorized_keys file with the public key to log in. What I'm not sure about is the additional information I need to restrict what that account is allowed to do. For example, I know I can put commands such as:
no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding
You would want a line in your authorized_keys file that looks like this.
permitopen="host.domain.tld:443",no-pty,no-agent-forwarding,no-X11-forwardi
ng,command="/bin/noshell.sh" ssh-rsa AAAAB3NzaC.......wCUw== zoredache
If you want to allow access only for a specific command -- like svn -- you can also specify that command in the authorized_keys file:
command="svnserve -t",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding [KEY TYPE] [KEY] [KEY COMMENT]
From http://svn.apache.org/repos/asf/subversion/trunk/notes/ssh-tricks
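With such a key in place, the client just uses the svn+ssh scheme and the forced svnserve -t is what actually runs on the server (repository path and hostname are placeholders):
svn checkout svn+ssh://user@host/path/to/repository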
I made a C program which looks like this:
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>

/* Exit cleanly when the SSH session hangs up; other caught signals are simply ignored */
void sig_handler(int signo)
{
    if (signo == SIGHUP)
        exit(0);
}

int main()
{
    /* Catch INT and TSTP so Ctrl-C / Ctrl-Z cannot break out, and HUP so we exit when the connection closes */
    signal(SIGINT, &sig_handler);
    signal(SIGTSTP, &sig_handler);
    signal(SIGHUP, &sig_handler);
    printf("OK\n");
    /* This "shell" never runs commands; it just blocks until the session ends */
    while (1)
        sleep(1);
    exit(0);
}
I set the restricted user's shell to this program.
I don't think the restricted user can execute anything, even if they run ssh server command, because remote commands are executed through the user's shell, and this shell does not execute anything.
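Building and installing it goes roughly like this (the file and account names are placeholders):
gcc -o noshell noshell.c
sudo install -m 755 noshell /usr/local/bin/noshell
# register it as a valid login shell, then assign it to the restricted account
echo /usr/local/bin/noshell | sudo tee -a /etc/shells
sudo chsh -s /usr/local/bin/noshell restricteduser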
See this post on authenticating public keys.
The two main things you need to remember are:
Make sure you chmod 700 ~/.ssh
Append the public key block to authorized_keys
You will generate a key on the user's machine via whatever SSH client they are using. PuTTY, for example, has a utility (PuTTYgen) to do exactly this. It generates both a private and a public key.
The contents of the public key file generated will be placed in the authorized_keys file.
Next you need to make sure that the SSH client is configured to use the private key that generated the public key. It's fairly straightforward, but slightly different depending on the client being used.
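With OpenSSH, for example, the whole round trip is roughly this (file names and hostname are placeholders):
# on the user's machine: generate the key pair
ssh-keygen -t rsa -f ~/.ssh/tunnel_key
# append the public half to the server account's authorized_keys
cat ~/.ssh/tunnel_key.pub | ssh user@server 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys'
# connect using the matching private key
ssh -i ~/.ssh/tunnel_key user@server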
Related
How do I get root access to my Google VM instance, and how can I log into my VM instance from my PC with an SSH client such as PuTTY?
I would also like to add that I have tried to use sudo for things that need root access, such as yum or wget, but it does not let me: it asks for the root password, and I do not know how or where I would be able to get that password.
You can become root via sudo su. No password is required.
How do I use sudo to execute commands as root?
(splitting this off from the other answer since there are multiple questions within this post)
Once you connect to your GCE VM using PuTTY or gcloud compute ssh, or even by clicking on the "SSH" button in the Developers Console next to the instance, you should be able to use the sudo command. Note that you shouldn't be using the su command to become root; just run:
sudo [command]
and it should not prompt you for a password.
If you want to get a root shell to run several commands as root and you want to avoid prefixing all commands with sudo, run:
sudo su -
If you're still having issues, please post a new question with the exact command you're running and the output that you see.
sudo su root <enter key>
No password required :)
If you want to connect to your GCE (Google Cloud) server with PuTTY as root, here is the flow:
Use PuTTYgen to generate two .ppk files:
one for your gce-default-user
one for root
Do the following in PuTTY (replace gce-default-user with your GCE username):
Putty->session->Connection->data->Auto-login username: gce-default-user
Putty->session->Connection->SSH->Auth->Private-key for authentication: gce-default-user.ppk
Then connect to the server as your gce-default-user.
Make the following changes in sshd_config:
sudo su
nano /etc/ssh/sshd_config
PermitRootLogin yes
UsePAM no
Save+exit
service sshd restart
Putty->session->Connection->data->Auto-login username: root
Putty->session->Connection->SSH->Auth->Private-key for authentication: root-gce.ppk
Now you can log in as root via PuTTY.
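Note that for the root login to work, root's own authorized_keys must also contain the public half of root-gce.ppk (export it from PuTTYgen in OpenSSH format first). Roughly:
sudo mkdir -p /root/.ssh
sudo chmod 700 /root/.ssh
# paste the OpenSSH-format public key exported from PuTTYgen
echo "ssh-rsa AAAA... root" | sudo tee -a /root/.ssh/authorized_keys
sudo chmod 600 /root/.ssh/authorized_keys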
If you need to use Eclipse's remote system support and log in as root:
Eclipse->windows->preferences->General->network Connection->SSH2->private-keys:
root-gce.ppk
Please try sudo su - on GCE.
By default on GCE, no password is required to sudo ("do as a substitute user"). The - argument to su (substitute user) further simulates a full login, using the target user's configured login shell and its profile scripts to set up a fresh environment (the default target user for both commands is root). In any case, you'll at least notice the prompt change from ending in $ to #.
Just go to the cloud shell by clicking SSH, and run the password-change command for the root user using sudo:
sudo passwd
It will change the root password. Then, to become root, use the command:
su
Type your password and you become root.
How do I connect to my GCE instance using PuTTY?
(splitting this off from the other answer since there are multiple questions within this post)
Take a look at setting up ssh keys in the GCE documentation which shows how to do it; here's the summary but read the doc for additional notes:
Generate your keys using ssh-keygen or PuTTYgen for Windows, if you haven't already.
Copy the contents of your public key. If you just generated this key, it can probably be found in a file named id_rsa.pub.
Log in to the Developers Console.
In the navigation, Compute->Compute Engine->Metadata.
Click the SSH Keys tab.
Click the Edit button.
In the empty input box at the bottom of the list, enter the corresponding public key, in the following format:
<protocol> <public-key> username@example.com
This makes your public key automatically available to all of your instances in that project. To add multiple keys, list each key on a new line.
Click Done to save your changes.
It can take several minutes before the key is inserted into the instance. Try connecting with ssh to your instance. If it is successful, your key has been propagated to the instance.
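For example, generating a key and printing it in exactly that format can be done like this (the username, file name and IP are placeholders):
ssh-keygen -t rsa -f ~/.ssh/gce_key -C username@example.com
# the line to paste into the SSH Keys tab is the content of the .pub file:
cat ~/.ssh/gce_key.pub
# prints: ssh-rsa AAAA... username@example.com
# then connect as that username to the instance's external IP:
ssh -i ~/.ssh/gce_key username@<instance-external-ip>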
I read in man sshd that one can add post-login processing when a user logs in using a particular key:
environment="FOO=BAR" ssh-rsa AAA... keytag
But when I try to ssh into the system, the target host does not register the line and instead asks for a password. What is the right way of adding this? I would like to do something like
command="echo|mail -s ${USER},${HOSTNAME} a.monitored.email#example.com" ssh-rsa AAA... keytag
I am using Suse SLE 11 SP2.
First, according to the documentation for command="command":
Specifies that the command is executed whenever this key is used for authentication. The command supplied by the user (if any) is ignored. The command is run on a pty if the client requests a pty; otherwise it is run without a tty. If an 8-bit clean channel is required, one must not request a pty or should specify no-pty. A quote may be included in the command by quoting it with a backslash. This option might be useful to restrict certain public keys to perform just a specific operation. An example might be a key that permits remote backups but nothing else. Note that the client may specify TCP and/or X11 forwarding unless they are explicitly prohibited. The command originally supplied by the client is available in the SSH_ORIGINAL_COMMAND environment variable. Note that this option applies to shell, command or subsystem execution. Also note that this command may be superseded by either a sshd_config(5) ForceCommand directive or a command embedded in a certificate.
Using this option, it is possible to force execution of a given command, and no other, whenever this key is used for authentication. This is not what you're looking for.
To run a command after login, you can add something like this to ~/.bashrc:
if [[ -n $SSH_CONNECTION ]] ; then
    echo | mail -s "${USER},${HOSTNAME}" a.monitored.email@example.com
fi
Second, you need to verify the permissions of the authorized_keys file and the folder / parent folders in which it is located.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
For more information see: https://www.complang.tuwien.ac.at/doc/openssh-server/faq.html#3.14
I want to transfer a .png file from a directory on my computer to a directory on a remote server.
I have to use SFTP as the secure file transfer mode, and I already have a UNIX shell script (.ksh) that copies the files in the normal way. How do I implement the transfer in SFTP mode?
Use sftp instead of whatever command you are using in your .ksh script. See the sftp man page for reference.
You may also want to look at scp (secure copy) - see the scp man page.
EDIT
sftp is mostly for interactive operations; you need to specify the host you want to connect to:
sftp example.com
You will be prompted for a username and password, and the interactive session will begin.
Although sftp can be used in scripts, scp is much easier to use:
scp /path/to/localfile user@host:/path/to/dest
You will be prompted for a password.
Edit 2
Both scp and sftp use SSH as the underlying protocol; see this and this.
The best way to set them up to run from scripts is to set up passwordless authentication using keys. See this and this. I use this extensively on my servers. After you set up keys, you can run:
scp -i private-key-file /path/to/local/file user@host:/path/to/remote
sftp -oIdentityFile=private-key-file -b batch-file user@host
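Setting up the keys themselves is a one-time step, roughly (file names and hostname are placeholders):
# generate a key pair without a passphrase so scripts can use it unattended
ssh-keygen -t rsa -N "" -f ~/.ssh/transfer_key
# install the public half on the remote host
ssh-copy-id -i ~/.ssh/transfer_key.pub user@host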
If you want to authenticate with a password, you may try the expect package. The simplest script may look like this:
#!/usr/bin/expect
spawn sftp -b batch-file user@host
expect "*?assword:*"
send "password\r"
interact
See this, this and this for more info.
Send commands through sftp on one line:
Make a file and save it as my_batch_file:
cd /root
get blah.txt
bye
Run this to execute your batch file:
eric@dev /home/el $ sftp root@10.30.25.15 < my_batch_file
Connecting to 10.30.25.15...
Password:
sftp> cd /root
sftp> get blah.txt
Fetching /root/blah.txt to blah.txt
sftp> bye
The file is transferred: that moved blah.txt from the remote computer to the local computer.
If you don't want to specify a password, do this:
How to run the sftp command with a password from Bash script?
Or if you want to do it the hacky insecure way, use bash and expect:
#!/bin/bash
expect -c "
spawn sftp username@your_host
expect \"Password\"
send \"your_password_here\r\"
interact "
You may need to install expect, and change the wording of 'Password' to lowercase 'p' to match what your prompt actually shows. The problem here is that it exposes your password in plain text in the file as well as in the command history, which nearly defeats the purpose of having a password in the first place.
I want to store the password in a file and use the --password-file option that comes with rsync; I don't want to use SSH public/private key authentication. I have tried this command:
rsync -avz --progress --password-file=pass.txt source destination
This says:
The --password-file option may only be used when accessing an rsync daemon.
So, I tried using:
rsync -avz --progress --password-file=pass.txt source destination rsyncd --daemon
But this returns various errors like unknown options. Is my syntax correct? How do I set up an rsync daemon on my Debian machine?
That is correct: --password-file is only applicable when connecting to an rsync daemon. You probably haven't set it up in the daemon itself though; the password you set there and the one you use during that call must match.
Edit /etc/rsyncd.secrets, set the owner/group of that file to root:root, and make sure it is not readable by other users (e.g. chmod 600); the daemon's default strict modes setting refuses a world-readable secrets file.
#/etc/rsyncd.secrets
root:YourSecretestPassword
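The daemon also needs at least one module defined in /etc/rsyncd.conf. A minimal sketch (the module name matches the command below, the path is a placeholder, and the name before the colon in rsyncd.secrets must match both auth users and the username you pass to rsync):
# /etc/rsyncd.conf
[module]
    path = /srv/rsync/module
    auth users = root
    secrets file = /etc/rsyncd.secrets
    read only = false
Then start the daemon with rsync --daemon (Debian also ships an init script / systemd unit for this).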
To connect to a rsync daemon, use a double colon followed by the module name, and the file or folder to synchronize (instead of a colon when using SSH),
RSYNC_PASSWORD="YourSecretestPassword" rsync -rtv user@remotehost::module/source/ destination/
NOTE:
this implies giving up SSH encryption: although the password itself is not sent across the network in plain text, your data is ...
this is already insecure as is; never use the same password as any of your user accounts.
For a better understanding of its inner workings (how to give specific IPs/processes the ability to upload to specified areas of the filesystem without the need for a user account): http://transamrit.net/docs/rsync/
After trying for a while, I got this to work. Since I am copying from my live server (and router data) to my local server on my laptop as a backup user, having the password unencrypted is not a problem; it stays secured on my wired laptop at home. First, install sshpass (on CentOS: yum install sshpass), then create a backup user and assign it a temporary password. I listed the -p option in case your SSH port is different from the default.
sshpass -p 'password' rsync -vaurP -e 'ssh -p 2222' backup@???.your.ip.???:/somedir/public_data/temp/ /your/localdata/temp
Understand that SSH with RSA keys is the better, permanent alternative, but this is a quick way to back up and restore on the go. It works if you are not too concerned about security but more concerned about getting your data backed up locally, as in an emergency or data recovery. You can change the backup user's password once the backup is completed. It is a lot faster to set up when your servers change IPs and users and are in constant modification (routers changing config and non-static IPs, or routers that are not local, where you are backing up clients' servers locally and do not always have SSH access). Some of my clients do not even have SSH keys installed and do not want the hassle of creating public keys, or you only have access to some servers on a temporary basis. By the way, if you want to do the restore, just reverse the case: from the same command shell you can do it by reversing the order of the target and source directories and creating another backup user with the same temporary password on the target. After you finish, delete the backup user or change its password on the target and/or source servers. You can protect this even further, as I have done, by replacing the password with a one-line file using a bash script for a multi-server environment. Alternatively, use the -f option so the password does not show up in the bash history: -f "/path/to/passwordfile".
NOTE: If you want to update only modified files, you should use the parameters -h -v -r -P -t, as described here: https://unix.stackexchange.com/questions/67539/how-to-rsync-only-new-files
rsync -arv -e \
"sshpass -f '/your/pass.txt' ssh -o StrictHostKeyChecking=no" \
--progress /your/source id@IP:/your/destination
You may have to install "sshpass" if you don't already have it.
I have a Windows shared folder named \\mymachine\sf and I want to mount it on Ubuntu. I use the smbmount command as below:
smbmount //mymachine/sf /mnt/sf -o <username>
The output is like this:
retrying with upper case share name
mount error(6): No such device or address
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I'm sure the share exists, and mymachine responds to ping.
Any idea?
Double check that the share exists and is the name you expect with:
smbclient -L //mymachine -U <username>
Also double check that the directory your share points to (as mentioned in smb.conf) actually exists on the server/host. This is one situation where you will receive that error, despite smbclient -L //hostname giving reasonable output.
Make sure that the directory the samba share points to exists on the server side as well (might have been deleted or mount might have failed at boot). smbclient -L //mymachine -U <username> lists shares as available even though they're not available!
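If the share does show up in that listing, the mount syntax itself may be the problem; on recent Ubuntu releases smbmount has been replaced by mount.cifs (which the error message already hints at), so a sketch of the equivalent call would be (share and user names are placeholders):
sudo mount -t cifs //mymachine/sf /mnt/sf -o username=<username>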