Always use the default ssh config - sshd

Is it possible to disallow a user-specified sshd_config?
ssh -F myconfig
My situation is as follows. There is a user on my system who can log in via ssh. He can also copy files onto the system via scp.
Accordingly, he could also upload and use his own sshd config, one with significantly greater privileges, e.g. AllowTcpForwarding or AllowAgentForwarding.
How can this be prevented, so that the sshd config assigned to each user is always used?
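For illustration, the kind of per-user setting I mean would look something like this in the server's /etc/ssh/sshd_config (the username alice is a placeholder):
Match User alice
    AllowTcpForwarding no
    AllowAgentForwarding no
    X11Forwarding no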

Related

I need to connect to an instance created on a VMware-based Eucalyptus cloud platform, but I don't understand how to do it.

Connect to instance: i-38942195
To connect to your instance, be sure the security group my-test-security-group has TCP port 22 open to inbound traffic, then perform the following steps (these instructions do not apply if you did not select a key pair when you launched this instance):
Open an SSH terminal window.
Change your directory to the one where you stored your key file my-test-keypair.pem
Run the following command to set the correct permissions for your key file:
chmod 400 my-test-keypair.pem
Connect to your instance via its public IP address by running the following command:
ssh -i my-test-keypair.pem root@192.168.0.29
Eucalyptus no longer supports VMware, but to generally troubleshoot instance connectivity you would first check that you are using a known good image such as those available via:
# python <(curl -Ls https://eucalyptus.cloud/images)
and ensure that the instance booted correctly:
# euca-get-console-output i-38942195
If that looks good (check the console output for instance metadata access for the SSH key), then check that the security group rules are correct and that the instance is running with the expected security group and SSH key.
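For example, with euca2ools configured, something like the following (the group and instance IDs are taken from the question):
# euca-describe-groups my-test-security-group
# euca-describe-instances i-38942195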
VMWare deprecation notice from version 4.1:
Support for VMWare features in Eucalyptus has been deprecated and will be removed in a future release.
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#release-notes/4.1.0/4.1.0_rn_features.html
Euca2ools command:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#euca2ools-guide/euca-get-console-output.html

Add nginx config on the fly

I am building a multi-tenant application where requests to multiple domains have to be serviced by the same nginx server.
In order to achieve this, a script creates an nginx config for each domain after a registration process and adds it to a folder. The base nginx configuration has been set up to read configs from this folder.
If I manually restart nginx using sudo service nginx restart, the application works fine. However, I am looking for this to happen without manual intervention, i.e. I want my script to refresh the nginx config, and I want to do it without entering a sudo password again.
Can someone help me achieve this?
I would strongly discourage using service nginx restart to reload configs, especially in a multi-tenant environment. You risk interrupting ongoing requests, sessions, etc. That may be acceptable, but each tenant would have to make that determination and do so at an appropriate time. Nginx supports the command service nginx reload to address this concern: reload allows configs to be reloaded without any downtime.
You could trigger the command in at least three ways:
Periodic cron job (easiest to set up, least efficient)
Manually triggering the command
Trigger through filesystem monitoring
Option 2 would be good if, for example, you had some web interface that allows a tenant to modify a config, so you know when to trigger the command or to send a message to some other service that triggers it. You could securely avoid the sudo password prompt by granting the web application the ability to run that single command as root: run visudo and add the line
www-data ALL=(ALL) NOPASSWD: /usr/sbin/service nginx reload
where www-data is whatever user your application runs under. Then you can execute the shell command through whatever API is appropriate for the language you are using.
Option 3 would be the most robust. There are several options for monitoring the filesystem, but I would recommend incron. Here's a guide to install and configure incron. You could monitor changes to whichever directory you store configs in and use service nginx reload in place of the example command in the tutorial.
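As a sketch, a minimal incrontab entry could look like the following; the watched directory /etc/nginx/conf.d is an assumption, so point it at whichever folder your script writes tenant configs into (combine with the sudoers rule from option 2 if the incrontab does not belong to root):
/etc/nginx/conf.d IN_CREATE,IN_MODIFY,IN_DELETE /usr/bin/sudo /usr/sbin/service nginx reload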

How to encrypt docker images or source code in docker images?

Say I have a docker image, and I deployed it on some server. But I don't want other users to access this image. Is there a good way to encrypt the docker image?
Realistically, no: if a user has permission to use the docker daemon then they are going to have access to all of the images. This is due to the elevated permissions docker requires in order to run.
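For instance, anyone who can start containers can mount the host's root filesystem into one and change it at will; a minimal sketch (alpine is just an arbitrary small image):
docker run --rm -it -v /:/host alpine chroot /host /bin/sh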
See the extract from the docker security guide for more info on why this is.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.
First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction. This is similar to how virtualization systems allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine.
This has a strong security implication: for example, if you instrument Docker from a web server to provision containers through an API, you should be even more careful than usual with parameter checking, to make sure that a malicious user cannot pass crafted parameters causing Docker to create arbitrary containers.
For this reason, the REST API endpoint (used by the Docker CLI to communicate with the Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you happen to run Docker directly on your local machine, outside of a VM). You can then use traditional UNIX permission checks to limit access to the control socket.
You can also expose the REST API over HTTP if you explicitly decide to do so. However, if you do that, being aware of the above mentioned security implication, you should ensure that it will be reachable only from a trusted network or VPN; or protected with e.g., stunnel and client SSL certificates. You can also secure them with HTTPS and certificates.
The daemon is also potentially vulnerable to other inputs, such as image loading from either disk with ‘docker load’, or from the network with ‘docker pull’. This has been a focus of improvement in the community, especially for ‘pull’ security. While these overlap, it should be noted that ‘docker load’ is a mechanism for backup and restore and is not currently considered a secure mechanism for loading images. As of Docker 1.3.2, images are now extracted in a chrooted subprocess on Linux/Unix platforms, being the first step in a wider effort toward privilege separation.
Eventually, it is expected that the Docker daemon will run restricted privileges, delegating operations to well-audited sub-processes, each with its own (very limited) scope of Linux capabilities, virtual network setup, filesystem management, etc. That is, most likely, pieces of the Docker engine itself will run inside of containers.
Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the server, and move all other services within containers controlled by Docker. Of course, it is fine to keep your favorite admin tools (probably at least an SSH server), as well as existing monitoring/supervision processes (e.g., NRPE, collectd, etc).
If only some strings need to be encrypted, you could encrypt that data using openssl or an alternative tool. The encryption would be set up inside the docker container: when the container is built, the data is encrypted; when the container is run, the data is decrypted (possibly by an entrypoint script using a passphrase passed in from a .env file). This way the image can be stored safely.
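As a rough sketch of the idea (the file names and the PASSPHRASE environment variable are placeholders; -pbkdf2 requires OpenSSL 1.1.1 or later):
# at image build time, encrypt the sensitive data:
openssl enc -aes-256-cbc -salt -pbkdf2 -in secrets.txt -out secrets.enc -pass env:PASSPHRASE
# at container start, e.g. in an entrypoint script, decrypt it again:
openssl enc -d -aes-256-cbc -pbkdf2 -in secrets.enc -out secrets.txt -pass env:PASSPHRASE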
I am going to play with it this week as time permits, as I am pretty curious myself.

Password-less login for SSH via SSH

I am writing a script which needs to be able to access a server with limited access. I wish to do this by going through another Unix server which I do have access to and which also has access to the other computer.
I can SSH into the second machine from home and not be prompted for a password, since I generated an SSH key and installed it using ssh-copy-id. When I am physically at the second machine, I can SSH into the third in the same manner, without being prompted for a password.
However, when I SSH into the second and then try to SSH into the third, I am prompted for the passphrase for the key and the password for the third computer.
Why does this happen, and how can I stop this prompt?
EDIT: To clarify some points.
I do not have root permission on either machine I am SSHing into, only on my machine at home.
I mistyped above (now fixed): while physically at the second machine I can SSH into the third server.
Diagram
Machine A : Home machine, root access
| SSH (passwordless)
v
Machine B : Publicly accessible server, no root permissions
| SSH (passwordless when physically logged in at machine B,
|      password prompted when B was reached via SSH from machine A)
v
Machine C : Only accessible on campus
(which B is and A is not), no root permissions
You mention being able to get into the 2nd machine from the 1st without a password. So if there is a 3rd machine in the chain, you need to set up SSH keys on the 3rd machine too, i.e., copy the public key of your account on the 2nd machine into the 3rd machine's authorized_keys.
Another good tip is to manually run SSH with the verbose/debugging (-v) option so you can see exactly what is happening at each step.
-v      Verbose mode. Causes ssh to print debugging messages about its progress. This is helpful in debugging connection, authentication, and configuration problems. Multiple -v options increase the verbosity. The maximum is 3.
This has saved me a lot of headaches in the past by showing me exactly how the login process is flowing & what exactly is clogging it up.
So if, say, your 3rd machine is named machine3.local, then your ssh command using verbose mode would be:
ssh -v machine3.local
EDIT: The original poster says that he is being asked for a passphrase for a key he generated for the 3rd machine. If that's the case, that's the problem. You can't have a passphrase on an SSH key if you want passwordless access (unless you keep the key loaded in an agent such as ssh-agent).
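In that case, one fix is to generate a passphrase-less key on the 2nd machine and copy it to the 3rd; a sketch, reusing machine3.local from above (the username is a placeholder, and take care not to overwrite an existing key):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id user@machine3.local
ssh user@machine3.local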
ANOTHER EDIT: Also, be sure the files have permissions that match the following and are owned by the account trying to connect, as this example shows:
-rw------- [username] [usergroup] authorized_keys
-rw------- [username] [usergroup] id_rsa
-rw-r--r-- [username] [usergroup] id_rsa.pub
-rw-r--r-- [username] [usergroup] known_hosts
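One way to set these, assuming the default ~/.ssh layout:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts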
Just check whether you are logged in as root on the second computer.
If so, then you have to create SSH keys for root as you did for the user.
Basically, when you are logged in as root you have to create SSH keys again, even if you have already created them for a regular user.

Amazon EC2: How Do I Host My Own Content on a Bitnami-WordPress Instance?

I created an instance to host my WordPress blog. I made a key pair and converted it using PuTTYgen so that it would work with WinSCP.
My security group that is associated with my instance has:
ICMP Allow All
TCP 0-65535
TCP 22 (SSH)
TCP 80 (HTTP)
TCP 443 (HTTPS)
UDP 0-65535
I am running a Bitnami-WordPress 3.2.1-0 Ubuntu AMI.
My question is: how do I host a simple file on my instance?
UPDATE: I was able to log in using SFTP by simply filling in my instance's public DNS as the host and the PuTTYgen key as the private key; the username I had to use was bitnami. So now I have access to the server. How or where do I put a file so that it is served at www.mywebsite.com/myfile.file?
I am assuming that I need to SSH into the server using PuTTY and add it to the www directory?
What I have tried:
I tried logging in using WinSCP with the host name set to my instance's public DNS, and the private key file set to the converted PuTTYgen file that was originally the key pair for the instance.
Using SFTP, when I press login it asks me for a username; entering "user" or "ec2-user" I get an error saying:
"Disconnected: no supported authentication methods available (server sent: publickey). Server refused our key. Authentication failed."
Using root for the username, it asks for the passphrase that I created for my key pair using PuTTYgen. It accepts it, but then I get this error:
"Received too large (1349281121 B) SFTP packet. Max supported packet size is 1024000 B. The error is typically caused by message printed from startup script (like .profile). The message may start with "Plea". Cannot initialize SFTP protocol. Is the host running a SFTP server?"
If in WinSCP I put the username as "user" and the password as "bitnami" (the default WordPress password for the Bitnami AMI) before I press login, it gives me this error:
Disconnected: No supported authentication methods available (server sent: publickey). Authentication log (see session log for details): Using username: "user". Server refused our key. Authentication failed.
I get the same errors using SCP instead of SFTP in WinSCP, except when I use SCP and press login with username "root": it asks me for my passphrase, and after entering that I get this error:
"Connection has been unexpectedly closed. Server sent command exit status 0. Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended)."
Also, if you want to remove /wordpress from the URL, you can use the following instructions, which I posted on my blog (travisnelson.net):
$ sudo chmod 777 /opt/bitnami/apache2/conf/httpd.conf
$ vi /opt/bitnami/apache2/conf/httpd.conf
changed DocumentRoot to be: DocumentRoot "/opt/bitnami/apps/wordpress/htdocs"
$ sudo chmod 544 /opt/bitnami/apache2/conf/httpd.conf
$ sudo apachectl -k restart
Then in WordPress, change the Site address (URL) in General Settings to not have /wordpress.
Hope this helps
If you are already able to connect using SFTP, now you just need to copy the file. Where you need to copy it depends on what you are trying to do.
BitNami Wordpress AMI has the following directory structure (I only include the relevant directories for this question):
/opt/bitnami
|
|-- apache2/htdocs
|-- apps/wordpress/htdocs
You mentioned that you want www.mywebsite.com/myfile.file. If you didn't modify the default Apache configuration, you will need to copy the file into /opt/bitnami/apache2/htdocs (this is the DocumentRoot for the BitNami WordPress AMI).
If you want that file to be accessed from www.mywebsite.com/wordpress/myfile.file, then you need to copy it into /opt/bitnami/apps/wordpress/htdocs.
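For example, from any machine that has the key, a single scp would do it (the public DNS is a placeholder; use the wordpress/htdocs path instead for the second case):
scp -i my-test-keypair.pem myfile.file bitnami@<instance-public-dns>:/opt/bitnami/apache2/htdocs/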
If what you are trying to do is manually install a theme or plugin, you can follow the WordPress documentation, taking into account that the WordPress installation directory is /opt/bitnami/apps/wordpress/htdocs.
Also, you can find below some links to the BitNami wiki explaining how to connect to the AMIs. I include them as a reference for other users who hit the same connection issues.
Further reading:
How to connect to your amazon instance
How to upload files from Windows
I had a similar problem recently. Having set up Bitnami WordPress on Amazon AWS, I was unable to modify, add, or remove themes from within the WordPress admin interface, even though all of my permissions were set up appropriately according to WordPress recommended settings. However, I did not want to resort to turning FTP access on.
I was able to resolve the issue by:
Setting the file access method for Bitnami WordPress to 'direct'.
Changing ownership of all the WordPress files to the Bitnami and Apache users.
Adding the bitnami user to the Apache group and the Apache user to the bitnami group.
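As a sketch of one way to apply these (the paths and the bitnami/daemon user and group are the Bitnami stack defaults; verify them against your own install):
$ echo "define('FS_METHOD', 'direct');" | sudo tee -a /opt/bitnami/apps/wordpress/htdocs/wp-config.php
$ sudo chown -R bitnami:daemon /opt/bitnami/apps/wordpress/htdocs
$ sudo usermod -a -G daemon bitnami
$ sudo usermod -a -G bitnami daemon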
