Ranger file manager - Access remote server? - console

Is it possible to access remote servers in the Ranger (CLI) file manager using sftp/fish/... protocols, for example:
sftp://user@server/home/user

It does not look like this is possible in Ranger itself. But you can mount the remote filesystem using sshfs and then access it like any other file on your local system:
mkdir /mnt/server
sshfs user@server:dir /mnt/server
cd /mnt/server

You can use rclone to mount many different remote and cloud shares (sftp, WebDAV, S3, Google Drive, ...) onto your local filesystem.
The only thing I could not find was an rclone remote for plain ssh; the sftp remote covers that case, though.
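For example, a minimal sketch of configuring and mounting an sftp remote with rclone (the remote name, host, user and paths here are placeholders):
rclone config create myserver sftp host server user user
mkdir -p /mnt/server
rclone mount myserver:/home/user /mnt/server --daemon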

Related

AWS amplify/dynamo/appsync - how to sync data locally

All I want to do is essentially take the exact DynamoDB tables, with their data, that exist in a remote instance (e.g. the Amplify staging environment/API) and import those locally.
I looked at DataSync but that seemed to be FE only. I want to take the exact data from staging and sync that data to my local Amplify instance - is this even possible? I can't find any information that is helping right now.
I'm very used to using Mongo/Postgres etc. and literally being able to take a DB dump and just import that... I may be missing something here?
How about using dynamodump?
Download the data from AWS to your local machine:
python dynamodump.py -m backup -r REGION_NAME -s TABLE_NAME
Then import to Local DynamoDB:
dynamodump -m restore -r local -s SOURCE_TABLE_NAME -d LOCAL_TABLE_NAME --host localhost --port 8000
You have to build a custom script that reads from the online DynamoDB and then populates the local DynamoDB. I found the Docker image to be just perfect for running an instance; make sure to pass the jar arguments so the container is not ephemeral and your data persists.
Rough macro-level instructions:
Download Docker Desktop (if you want)
Start Docker Desktop and, in a terminal, pull the official DynamoDB Local image:
https://hub.docker.com/r/amazon/dynamodb-local/
docker pull amazon/dynamodb-local
And then run the docker container:
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
Now you can run a Python script that gets the data from the online DB and copies it into the local DynamoDB, as in the official docs:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-amazon-dynamodb-tables-across-accounts-using-a-custom-implementation.html
Once you point the connection at the local container (localhost:8000), you should be able to copy all the data.
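If you prefer not to write Python, a rough equivalent with the AWS CLI and jq is sketched below (single-page scan only, TABLE_NAME is a placeholder, and the table must already exist locally with the same key schema; large tables need scan pagination):
aws dynamodb scan --table-name TABLE_NAME --output json \
  | jq -c '.Items[]' \
  | while read -r item; do
      aws dynamodb put-item --endpoint-url http://localhost:8000 --table-name TABLE_NAME --item "$item"
    done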
I'm not too well versed on local Amplify instances, but for DynamoDB there is a product which you can use locally called DynamoDB Local.
The full list of download instructions for DynamoDB Local is available here: Downloading And Running DynamoDB Local
If you have docker installed in your local machine, you can easily download and start the DynamoDB Local service with a few commands:
Download
docker pull amazon/dynamodb-local
Run
docker run --name dynamodb -p 8000:8000 -d amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb
This will allow you to avail of 90% of DynamoDB features locally. However, migrating data from the DynamoDB web service to DynamoDB Local is not something that is provided out of the box. For that, you would need to create a small script which you run locally, which reads data from your existing table and writes it to your local instance.
An example of reading from one table and writing to a second can be found in the docs here: Copy Amazon Dynamodb Tables Across Accounts
One change you will have to make is manually setting the endpoint_url for DynamoDB Local:
dynamodb_client = boto3.Session(
    aws_access_key_id=args['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=args['AWS_SECRET_ACCESS_KEY'],
    aws_session_token=args['TEMPORARY_SESSION_TOKEN'],
).client('dynamodb', endpoint_url='YOUR_DDB_LOCAL_ENDPOINT_URL')  # endpoint_url is a client() argument, not a Session() one
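As a quick sanity check that the local endpoint is reachable (assuming DynamoDB Local is listening on port 8000 as above), you can point the AWS CLI at it:
aws dynamodb list-tables --endpoint-url http://localhost:8000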

How can I Download Files from Laravel Forge (Wordpress) to Local Machine?

I have installed WordPress on a site through Laravel Forge and would like to download it so I can start working on improving it on my local machine. How can I download it?
I've tried:
scp forge@myip:/mysite.com /Users/myname/sites
I am getting either this error if I do scp from local:
ssh: connect to host 111.111.111.111(fake_ip_on_forge) port 22: Operation timed out
or this error if I do scp from the remote server on Laravel Forge.
scp: /mysite.com: No such file or directory
It appears that the most likely scenario is that you can't SSH into the machine. First, verify that you can ssh forge@forgeip; then your scp command should work just fine, assuming that /mysite.com is the correct directory (on Forge a site usually lives under /home/forge/mysite.com, which would also explain the "No such file or directory" error). You'll also need to copy it recursively, so an scp -r will probably be in order.
Here's some documentation on setting up ssh access in forge
You can also learn more about scp flags in the documentation
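Putting that together, a typical sequence might look like this (a sketch only; /home/forge/mysite.com is the usual Forge location, but your path may differ):
ssh forge@111.111.111.111
scp -r forge@111.111.111.111:/home/forge/mysite.com /Users/myname/sites/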

Syncing local and remote directories using rsync+ssh+public key as a user different to the ssh key owner

The goal is to sync local and remote folders over ssh.
My current user is user1, and I have a password-less access setup over ssh to a server server1.
I want to sync local folder with a folder on server1 by means of rsync utility.
Normally I would run:
rsync -rtvz /path/to/local/folder server1:/path/to/remote/folder
SSH access works as expected and rsync is able to connect over SSH, but it returns a "Permission denied" error because on server1 the folder /path/to/remote/folder is owned by user2:user2. The folder's permissions do not allow it to be altered by anyone else.
user1 is a sudoer on server1, so sudo su - user2 works during an ssh session.
How do I force rsync to switch the user once it has connected to the server over SSH?
Adding user1 to the group user2 is not an option because all user/group management on the server is done automatically and replicated from a central repo every X minutes, which I have no access to.
The same goes for changing permissions/ownership of the destination folder: it is updated automatically on a regular basis with a reset of all permissions.
A possible solution coming to mind is a script that syncs the local folder with a temporary intermediate remote folder owned by user1 on the server, and then syncs the two remote folders as user2.
Googling for a shorter and prettier solution did not yield any success.
I have not tried it myself, but how about using rsync's --rsync-path option?
rsync -rtvz --rsync-path='sudo -u user2 rsync' /path/to/local/folder server1:/path/to/remote/folder
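Note that this only works if user1 can run rsync as user2 without a password prompt (there is no terminal for sudo to ask on). A sudoers rule along these lines would be needed on server1 (the rsync path is an assumption, check it with which rsync):
# added via visudo on server1
user1 ALL=(user2) NOPASSWD: /usr/bin/rsync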
To fix the permissions problem you need to run rsync over an SSH session that logs in remotely as user2:
rsync -avz -e 'ssh -i privatekeyfile' /path/to/local/folder/ user2@server1:/path/to/remote/folder
The following answer explains how to set up the SSH keys.
Ant, download fileset from remote machine
Set up password-less access for user1 to access user2@server1, then do:
rsync -rtvz /path/to/local/folder user2@server1:/path/to/remote/folder
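A minimal sketch of setting up that password-less access (this assumes you can authenticate as user2 at least once, or that someone who can will install the key for you):
ssh-keygen -t ed25519        # skip if you already have a key
ssh-copy-id user2@server1    # installs your public key for user2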

What's the syntax and prerequisite for --password-file option in rsync?

I want to use the --password-file option that comes with rsync. I don't want to use SSH public/private key authentication. I have tried this command:
rsync -avz --progress --password-file=pass.txt source destination
This says:
The --password-file option may only be used when accessing an rsync daemon.
So, I tried using:
rsync -avz --progress --password-file=pass.txt source destination rsyncd --daemon
But this returns various errors like "unknown option". Is my syntax correct? How do I set up an rsync daemon on my Debian machine?
That is correct:
--password-file is only applicable when connecting to an rsync daemon.
You probably haven't set it up in the daemon itself, though; the password you set there and the one you use in that call must match.
Edit /etc/rsyncd.secrets on the daemon side, set the owner/group of that file to root:root, and make sure it is readable only by root (e.g. chmod 600); with the default "strict modes" setting, rsync refuses a secrets file that other users can read.
#/etc/rsyncd.secrets
user:YourSecretestPassword
To connect to an rsync daemon, use a double colon (instead of the single colon used over SSH) followed by the module name and the file or folder to synchronize:
RSYNC_PASSWORD="YourSecretestPassword" rsync -rtv user@remotehost::module/source/ destination/
NOTE:
this implies giving up SSH encryption; although the password itself is not sent across the network in plain text, your data is ...
this is already insecure as is; never use the same password as any of your user accounts.
For a better understanding of its inner workings (how to give specific IPs/processes the ability to upload to specified areas of the filesystem without the need for a user account): http://transamrit.net/docs/rsync/
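For completeness, the module name used above has to be defined on the daemon side. A minimal /etc/rsyncd.conf might look like this (module name, path and user are placeholders):
# /etc/rsyncd.conf
[module]
    path = /srv/rsync/module
    auth users = user
    secrets file = /etc/rsyncd.secrets
    read only = false
After that, start the daemon with rsync --daemon (or enable your distribution's rsync daemon service).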
After trying for a while, I got this to work. Since I'm copying from my live server (and router data) to my local server on my laptop as a backup user, the unencrypted password is not a problem for me; it stays on my laptop at home. First install sshpass (on CentOS: yum install sshpass), then create a backup user and assign it a temporary password. I included the -p option for ssh in case your SSH port is different from the default.
sshpass -p 'password' rsync -vaurP -e 'ssh -p 2222' backup@???.your.ip.???:/somedir/public_data/temp/ /your/localdata/temp
I understand that SSH keys are the better permanent alternative, but this is a quick way to back up and restore on the go. It works if you are not too concerned about security but more concerned about getting your data backed up locally, as in an emergency or data recovery. You can change the backup user's password once the backup is complete.
It is also a lot faster to set up when your servers keep changing IPs, users and configuration (routers with changing configs and non-static IPs, or when you back up clients' servers locally and don't always have SSH access). Some of my clients don't even have SSH installed and don't want the hassle of creating public keys; on some servers you only have access on a temporary basis.
If you want to restore, just reverse the case: from the same command shell, swap the order of the target and source directories and create another backup user with the same temporary password on the target. When finished, delete the backup user or change its password on the target and/or source servers.
You can protect this even further, as I have done, by replacing the password with a one-line file and a bash script for multi-server environments. Alternatively, use the -f option so the password does not show up in the bash history: -f "/path/to/passwordfile". Regards
NOTE: If you want to update only modified files then you should use the parameters -h -v -r -P -t, as described here: https://unix.stackexchange.com/questions/67539/how-to-rsync-only-new-files
rsync -arv -e \
"sshpass -f '/your/pass.txt' ssh -o StrictHostKeyChecking=no" \
--progress /your/source id@IP:/your/destination
You may have to install "sshpass" if you don't have it already.

Using local settings through SSH

Is it possible to have an SSH session use all your local configuration files (.bash_profile, .vimrc, etc..) on login? That way you would have the same configuration for, say, editing files in vim in the remote session.
I just came across two alternatives to simply doing a git clone of your dotfiles. I take no credit for either of these and can't say I've used either extensively, so I don't know whether they have pitfalls.
sshrc
sshrc is a tool (actually just a big bash function) that copies over local rc-files without permanently writing them to the remote user's $HOME - the idea being that it might be a shared admin account that other people use. It appears to be customizable for different remote hosts as well.
.ssh/config and LocalCommand
This blog post suggests a way to automatically run a command when you log in to a remote host. It tars and pipes a set of files to the remote, then un-tars them in the remote's $HOME:
Your local ~/.ssh/config would look like this:
Host *
    PermitLocalCommand yes
    LocalCommand tar c -C${HOME} .bashrc .bash_profile .exports .aliases .inputrc .vimrc .screenrc | ssh -o PermitLocalCommand=no %n "tar mx -C${HOME}"
You could modify the above to only run the command on certain hosts (instead of the * wildcard) or customize for different hosts as well. There might be a fair amount of duplication per host with this method - although you could package the whole tar c ... | ssh .. "tar mx .." into a script maybe.
Note the above looks like it clobbers the same files on the remote when you connect, so use with caution.
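A hypothetical wrapper script for that tar-over-ssh idea (the script name and dotfile list are made up; adjust to taste):
#!/usr/bin/env bash
# ssh-with-dotfiles: copy a few local dotfiles into the remote $HOME, then open a normal ssh session.
# Usage: ./ssh-with-dotfiles user@host
set -e
host="$1"
tar cf - -C "$HOME" .bashrc .bash_profile .vimrc .inputrc \
    | ssh -o PermitLocalCommand=no "$host" 'tar mxf - -C "$HOME"'
exec ssh "$host"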
Use a dotfiles.git repo
What I do is keep all my config files in a dotfiles.git on a central server.
You can set it up so that when you ssh into a remote machine, you automatically pull the latest version of the dotfiles. I do something like this:
ssh myhost
cd ~/dotfiles
git pull --rebase
cd ~
ln -sf dotfiles/$username/linux/.* .
Note:
To put that in a shell script, you can automate the process of executing commands on a remote machine by piping them to ssh (see the sketch after these notes).
The "$username" is there so that you can share your config files with other people you're working with.
The "ln -sf" creates symbolic links to all your dotfiles, overwriting any local ones, such that ~/.emacs is linked to the version controlled file ~/dotfiles/$username/.emacs.
The use of a "linux" subdirectory is just to allow for configuration changes across platforms. I also have a mac directory under dotfiles/$username/mac. Most of the files in the /mac directory are symlinked from the linux directory as it's very similar, but there are some exceptions.
Finally, note that you can make this even more sophisticated with hostnames and the like rather than just a generic 'linux'. With a dotfiles.git, you can also raid dotfiles from your friends, which is awesome -- everyone has their own set of little tricks and hacks.
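As a sketch of the "pipe commands to ssh" note above ($username is the same placeholder used in the layout description):
username=yourname
ssh myhost "cd ~/dotfiles && git pull --rebase && cd ~ && ln -sf dotfiles/$username/linux/.* ."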
No, because it's not SSH using your config files, but the remote shell.
I suggest keeping your config files in Subversion or some other VCS. Here's how I do it.
Well, no, because as Andy Lester says, the remote machine is the one doing the work, and it has no access back to your local machine to get .vimrc ...
On the other hand, you could use sshfs to mount the remote file system locally and edit the files locally. This doesn't require you to install anything on the remote machine. Not sure how efficient it is, maybe not great for editing big files over slow links.
Or Komodo IDE has a neat "Open >> Remote File" option which lets you edit files on remote
machines by scping them back and forth automatically.
I do this kind of thing every day. I have about 15 bash rc files, a .vimrc, a few vim plugin scripts, a .screenrc and some other rc files. I have a sync script (written in bash) which uses the cool rsync command to sync all these files to remote servers. Every time I update some files on my main server, I call the script to sync them to the remote servers.
Setting up a svn/git/hg repository on the main server also works for me but my remote servers need to be repeatedly reinstalled for testing. So I find it's more convenient to use rsync.
A few years ago I also used the rdist tool which can also meet the requirement for most of the time. But now I prefer rsync as it supports incremental sync which is very efficient.
ssh can be configured to pass certain environment variables through to the other (remote) side. And since most shells check some environment variables for additional settings to apply, you can hack that into applying some local settings remotely. But it's a bit complicated, and most administrators turn off SSH environment variable pass-through in the sshd config anyway.
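For reference, that pass-through is controlled by SendEnv on the client and AcceptEnv on the server (the variable name here is only an illustration):
# client side, ~/.ssh/config
Host myhost
    SendEnv MY_RC_SETTINGS
# server side, /etc/ssh/sshd_config (needs admin access and a reload)
AcceptEnv MY_RC_SETTINGS
The remote shell's startup files can then inspect $MY_RC_SETTINGS and apply settings accordingly.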
You could always just copy the files to the machine before connecting with ssh:
#!/bin/bash
scp ~/.bash_profile ~/.vimrc user@host:
ssh user@host
This works best if you are using keys to login and no one else logs in as that user.
Here's a simple bash script I've used for this purpose. It syncs over some folders I like to have copied using rsync and then adds the ~/bin folder to the remote machine's .bashrc if it's not there already. It works best if you have copied your ssh keys to each server. I use this approach instead of a "dotfiles repo" as lots of the servers I connect to don't have git on them.
So to use it, you'd do something like this:
./bin_sync_to_machine.sh server1
bin_sync_to_machine.sh
function show_help()
{
echo ""
echo "usage: SERVER {SERVER2 SERVER3 etc...}"
echo ""
exit
}
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
# Sync ~/bin and some dot files to remote server using rsync
for SERVER in $*; do
rsync -avrz --progress ~/bin/ -e ssh $SERVER:~/bin
rsync -avrz --progress ~/.vim/ -e ssh $SERVER:~/.vim
rsync -avrz --progress ~/.vimrc -e ssh $SERVER:~/.vimrc
rsync -avrz --progress ~/.aliases $SERVER:~/.aliases
rsync -avrz --progress ~/.aliases $SERVER:~/.bash_aliases
# Ensure remote server has ~/bin in the path
ssh $SERVER '~/bin/path_add_to_path.sh'
done
path_add_to_path.sh
pathadd() {
if [ -d "$1" ] && [[ ":$PATH:" != *":$1:"* ]]; then
PATH="${PATH:+"$PATH:"}$1"
fi
}
# Add to current path if running in a shell
pathadd ~/bin
# Add to ~/.bashrc
if ! grep -q PATH:~/bin ~/.bashrc; then
echo "PATH=\$PATH:~/bin" >> ~/.bashrc
fi
if ! grep -q "source ~/.aliases" ~/.bashrc; then
echo "source ~/.aliases" >> ~/.bashrc
fi
I wrote an extremely simple tool for this that will allow you to natively transport your .vimrc file whenever you ssh, by using SSHd built-in config options in a non-standard way.
No additional svn,scp,copy/paste, etc required.
It is simple, lightweight, and works by default on all server configurations I have tested so far.
https://github.com/gWOLF3/viSSHous
I think that https://github.com/fsquillace/kyrat does what you need.
I wrote it a long time ago, before sshrc was born, and it has more benefits compared to sshrc:
It does not depend on xxd on either host (which can be unavailable on the remote host)
Kyrat uses a more efficient encoding algorithm
It is just ~20 lines of code (really easy to understand!)
No need of root access or any installations to the remote host
For instance:
$> echo "alias q=exit" > ~/.config/kyrat/bashrc
$> kyrat myuser#myserver.com
myserver.com $> q
exit
