I can run the following command to accomplish what I am trying to do; however, I would like to set up entries in my ~/.ssh/config to handle a transparent jump:
ssh -tt login.domain.org gsissh -tt -p 2222 remote.behind.wall.domain.org
Note that the second hop MUST be made with gsissh, some info can be found here: http://toolkit.globus.org/toolkit/docs/5.0/5.0.4/security/openssh/pi/
AFAIK this precludes the standard use of netcat or the -W flag in the ProxyCommand option in the .ssh/config. I think this is because ssh will try to use ssh instead of gsissh on the intermediate machine.
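For reference, this is the standard ProxyCommand/-W pattern being ruled out above; with plain OpenSSH on both hops it would look something like the following (host names taken from the command above), but the final connection would still be negotiated by plain ssh rather than gsissh:
# Plain-OpenSSH jump shown for comparison only; it does not satisfy the gsissh requirement.
Host dest-plain
HostName remote.behind.wall.domain.org
Port 2222
ProxyCommand ssh -W %h:%p login.domain.org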
If I put something like this in my .ssh/config, it will hop through to the target machine, but when I exit I land in a shell on the intermediate machine, and it borks my ControlMaster setup: the next time I try to ssh to the final destination, I end up on the intermediate machine.
Host dest
HostName login.domain.org
PermitLocalCommand yes
LocalCommand gsissh -p 2222 remote.behind.wall.domain.org
Also, it seems that trickery using -L or -R is disabled for security reasons.
I would love some help if anybody has any tips.
Thanks
I am trying to run rsync as follows and am running into the error sshpass: Failed to run command: No such file or directory. I verified that the source /local/mnt/workspace/common/sectool and destination /prj/qct/wlan_rome_su_builds directories are available and accessible. What am I missing, and how do I fix this?
username@xxx-machine-02:~$ sshpass –p 'password' rsync –progress –avz –e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
sshpass: Failed to run command: No such file or directory
Would it be possible for you to check whether 'rsync' works without 'sshpass'?
Also, check whether the ports used by rsync are enabled. You can find the port info via cat /etc/services | grep rsync
The first thing is to make sure that the ssh connection is working smoothly. You can check this via "sudo ssh -vvv cnssbldsw@hydwclnxbld4" (please post the output). If you receive a message such as "ssh: connect to host hydwclnxbld4 port 22: Connection refused", the issue is with the openssh-server (not installed, or a broken package). Let's see what you get for the first command.
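A quick sketch of the diagnostics suggested above, using the host and paths from the question (adjust to your environment):
# 1. Does rsync over plain ssh work without sshpass?
rsync -avz --progress -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
# 2. Is the rsync port listed?
cat /etc/services | grep rsync
# 3. Verbose ssh connection test (post the output)
sudo ssh -vvv cnssbldsw@hydwclnxbld4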
I created a bash script my_vp.sh that uses two commands:
setterm -cursor off
setterm -powersave off
[...]
#execute video commands
[...]
and it is on computerA,
but when I execute it over ssh from another computer's terminal (computerB):
ssh pi@192.168.1.1
the video commands run correctly on computerA (the same machine where the script is),
but the setterm command takes effect on computerB (the terminal where I execute the ssh command).
Can somebody help me solve this?
thank you very much!
I am not sure I understood the question:
to execute a local script, but on another machine:
scp /path/to/local/script.bash pi@192.168.1.1:/tmp/copy_of_script.bash
and then, if it's copied correctly, execute it:
ssh pi#192.168.1.1 "chmod +x /tmp/copy_of_script.bash"
ssh pi#192.168.1.1 "bash /tmp/copy_of_script.bash"
to have the remote video (Xwindows, etc) commands appear on the originating machine:
replace ssh with ssh -X (to allow X forwarding, which will automatically allocate a DISPLAY on the remote machine that is tunneled back to the originating machine); see the sketch after this list
for the X forwarding to work, there are some requirements (usually OK by default, but YMMV): read more about those requirements in this Unix.se answer
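For example, a sketch that combines the steps above with X forwarding enabled (same address and path as in the commands above):
# Run the copied script with X11 forwarding so graphical output
# is displayed on the machine you are ssh'ing from
ssh -X pi@192.168.1.1 "bash /tmp/copy_of_script.bash"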
I want to store the password and use the --password-file option that comes with rsync. I don't want to use ssh public/private key encryption. I have tried this command:
rsync -avz --progress --password-file=pass.txt source destination
This says:
The --password-file option may only be used when accessing an rsync daemon.
So, I tried using:
rsync -avz --progress --password-file=pass.txt source destination rsyncd --daemon
But this returns various errors, like unknown options. Is my syntax correct? How do I set up an rsync daemon on my Debian machine?
That is correct:
--password-file is only applicable when connecting to an rsync daemon.
You probably haven't set the password in the daemon itself, though; the password you set there and the one you use in that call must match.
Edit /etc/rsyncd.secrets, set the owner/group of that file to root:root, and make it readable by that user only (e.g. chmod 600); by default rsync's strict modes reject a secrets file that other users can read.
#/etc/rsyncd.secrets
root:YourSecretestPassword
To connect to an rsync daemon, use a double colon (instead of the single colon used with SSH), followed by the module name and the file or folder to synchronize:
RSYNC_PASSWORD="YourSecretestPassword"; rsync -rtv user@remotehost::module/source/ destination/
NOTE:
this implies giving up SSH encryption; although the password itself is not sent across the network in plain text, your data is ...
this is already insecure as is; never use the same password as any of your user accounts.
For a better understanding of its inner workings (how to give specific IPs/processes the ability to upload to specified areas of the filesystem without the need for a user account): http://transamrit.net/docs/rsync/
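For completeness, a minimal daemon-side configuration might look like the sketch below; the module name, path, uid/gid and allowed user are placeholders, not values taken from the question:
# /etc/rsyncd.conf -- minimal sketch, adjust to your setup
uid = nobody
gid = nogroup
[module]
path = /srv/rsync/module
auth users = root
secrets file = /etc/rsyncd.secrets
read only = false
Start the daemon with rsync --daemon (on Debian the rsync package can also run it as a service) and connect with the double-colon syntax shown above.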
After trying for a while, I got this to work. Since I'm copying from my live server (and router data) to the local server on my laptop as a backup user, the unencrypted password is not a problem; it stays on my wired laptop at home. First you need to install sshpass (on CentOS: yum install sshpass), then create a backup user and assign it a temporary password. I listed the -p option in case your ssh port is different from the default.
sshpass -p 'password' rsync -vaurP -e 'ssh -p 2222' backup@???.your.ip.???:/somedir/public_data/temp/ /your/localdata/temp
I understand that SSH with RSA keys is the better permanent alternative, but this is a quick way to back up and restore on the go. It works if you are not too concerned about security and more concerned about your data being backed up locally, as in an emergency or data recovery. You can change the backup user's password once the backup is complete.
It is a lot faster to set up when your servers change IPs and users and are in constant modification (routers change config and have non-static IPs; routers are not always local and you are backing up client servers locally, where you don't always have SSH access). Some of my clients don't even have SSH installed and don't want the hassle of creating public keys; on some servers you only have access on a temporary basis.
By the way, if you want to do the restore, just reverse the case. Not much needs to change: from the same command shell you can do it by reversing the order of the target and source directories and creating another backup user with the same temporary password on the target. After finishing, delete the backup user or change its password on the target and/or source servers.
You can protect this even further, as I have done, by replacing the password with a one-line file and a bash script in a multi-server environment. An alternative is to use the -f option so the password does not show up in the bash history: -f "/path/to/passwordfile". Regards
NOTE: If you want to update only modified files, then you should use these parameters: -h -v -r -P -t, as described here https://unix.stackexchange.com/questions/67539/how-to-rsync-only-new-files
rsync -arv -e \
"sshpass -f '/your/pass.txt' ssh -o StrictHostKeyChecking=no" \
--progress /your/source id@IP:/your/destination
You may have to install "sshpass" if you don't have it already.
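If you go the -f route, the password file is just a one-line file; one possible way to create it (the path is a placeholder):
# Create the one-line password file read by sshpass -f and keep it private
printf '%s\n' 'YourPassword' > /your/pass.txt
chmod 600 /your/pass.txt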
I'm new to Unix. I need to copy a file over ssh. This is what I do:
me@localhost ~ $ ssh you@remotehost
Then, once the ssh connection is established, I get
you@remotehost ~ $
I'd like to use scp to copy files from localhost to remotehost. Once I have an ssh connection, how do I change the prompt back to me@localhost so that I can use the scp command? Is there a command for that?
Edit: The reason I need the ssh session is that after I copy the file I have to execute it. Is there a way to remain in the ssh session and use scp to copy the file that I'm editing on localhost?
You do not have to first create an SSH connection to use SCP. Simply use the scp command from your shell, and it will connect to the other server.
Most shells exit with exit. Ctrl-D may also work.
You can also:
scp /path/to/local-file you@remotehost:/remote/path
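Since the file also has to be executed afterwards (see the edit in the question), the copy and the run can be combined from the local side; a sketch with placeholder paths:
# Copy the file from the local machine, then execute it remotely
scp /path/to/local-file you@remotehost:/remote/path/
ssh you@remotehost 'chmod +x /remote/path/local-file && /remote/path/local-file'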
Try the screen command.
You can use scp on either side. Here are two examples:
If you are on your local host:
scp myfile you@remotehost:
If you are on the remote host:
scp you@<localhost's hostname>:myfile .
Substitute your localhost's hostname for <localhost's hostname> in the second command. If you are behind a router, it will be easier to use the first one.
Both assume that myfile is in the home directory on localhost and is being sent to the home directory on remotehost.
I have some applications and standard Unix tools sending their output to named pipes on Solaris. However, named pipes can only be read from local storage (on Solaris), so I can't access them over the network or place the pipes on NFS storage for networked access to their output.
That got me wondering whether there is an analogous way to forward the output of command-line tools directly to sockets, say something like:
mksocket mysocket:12345
vmstat 1 > mysocket 2>&1
Netcat is great for this. Here's a page with some common examples.
Usage for your case might look something like this:
Server listens for a connection, then sends output to it:
server$ my_script | nc -l 7777
Remote client connects to server on port 7777, receives data, saves to a log file:
client$ nc server 7777 >> /var/log/archive
netcat (also known as nc) is exactly what you're looking for. It's getting to be reasonably standard, but not available on all systems.
socat seems to be a beefed-up version of netcat, with lots more features, but less commonly available.
On Linux, if your shell is bash, you can also use /dev/tcp/<host>/<port> (a bash redirection feature, not a real device). See the Advanced Bash-Scripting Guide for more information.
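For instance, a bash-only sketch of the vmstat example from the question, with loghost and 12345 as placeholders (a listener such as nc -l 12345 >> /var/log/archive must be running on the other end):
# bash opens a TCP connection to loghost:12345 and the redirection sends
# vmstat's stdout and stderr into it -- no netcat needed on this side
vmstat 1 > /dev/tcp/loghost/12345 2>&1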
netcat will help establish a pipe over the network.
You may want to use one of:
ssh: secure (encrypted), already installed out-of-the-box on Solaris - but you have to set up a keypair for non-interactive sessions
e.g. vmstat 2>&1 | ssh -i private.key oss@remote.node "cat >vmstat.out"
netcat: simple to set up - but insecure and open to attacks
see http://www.debian-administration.org/articles/58 etc., and the sketch after this list
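A netcat equivalent of the ssh example above might look like this (port 7777 is a placeholder):
# on the receiving node: listen and capture the stream into a file
nc -l 7777 > vmstat.out
# on the Solaris box: pipe vmstat's output to the remote listener
vmstat 1 2>&1 | nc remote.node 7777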
Everyone is on the right track with netcat. But I want to add that if you are piping into nc and expecting a response, you will need to use the -q <seconds> option. From the manual:
-q seconds
after EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.
For instance, if you want to interact with your SSH Agent you can do something like this:
echo -en '\x00\x00\x00\x01\x0b' | nc -q 1 -U $SSH_AUTH_SOCK | strings
A more complete example is at https://gist.github.com/RichardBronosky/514dbbcd20a9ed77661fc3db9d1f93e4
* I stole this from https://ptspts.blogspot.com/2010/06/how-to-use-ssh-agent-programmatically.html