I am able to connect via SFTP using the same credentials in WinSCP, so why would I get an error message with Atom's remote-edit package?
I corrected this by modifying the sshd_config file on the server.
sudo vi /etc/ssh/sshd_config
I modified the following line:
PasswordAuthentication yes #changed from no to yes
then I restarted the ssh daemon:
sudo service ssh restart
and that did the trick. I believe that Atom is sending passwords in clear text to the server, so using password authentication may cause issues unless you have PasswordAuthentication set to yes.
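If you want to double-check that the change took effect, OpenSSH's sshd can print its effective configuration (assuming your version supports the -T test mode):
sudo sshd -T | grep -i passwordauthentication
It should print passwordauthentication yes after the restart.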
I have a CentOS server where I have installed the vsftpd service; however, I am getting the error
bash: sftp: command not found
Even the which sftp command can't find it.
Detailed steps below:
As root:
yum install vsftpd
Total download size: 139 k
Is this ok [y/N]: y
Configure:
vi /etc/vsftpd/vsftpd.conf
Change anonymous_enable=YES to anonymous_enable=NO
Add userlist_deny=NO after userlist_enable
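After those edits, the relevant lines in /etc/vsftpd/vsftpd.conf should look roughly like this (userlist_enable=YES is the stock default; check your file):
anonymous_enable=NO
userlist_enable=YES
userlist_deny=NO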
Add allowed users:
vi /etc/vsftpd/user_list
Replace contents with:
# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
user
Turn on Vsftpd service
chkconfig vsftpd on
Start the service
service vsftpd start
Can someone help me figure out what I'm doing wrong?
The sftp binary is provided by the openssh-clients package. Install that first:
yum install openssh-clients
then you can run sftp.
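Once the package is installed, you can confirm the binary is on your PATH and try a connection (the user and address here are just placeholders):
which sftp
sftp user@x.x.x.x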
Assuming the vsftpd daemon is now running and can get through any firewall you have, you need to use an ftp client to connect to the server.
yum install ftp
ftp x.x.x.x <-- IP address of server
That will show that it is working. Remotely, you will need a client such as FileZilla.
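If the connection hangs or is refused, it is worth confirming locally that vsftpd is listening on port 21 and that the firewall is not blocking it. A rough check, assuming netstat is available (net-tools package) and that iptables rather than firewalld is in use:
netstat -tlnp | grep :21
sudo iptables -L -n | grep 21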
NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point out where I am going wrong?
Thanks.
Check that the NFS service is started, or restart it:
sudo systemctl status nfs-kernel-server
In my case the service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines.
So I commented out one of the entries, restarted nfs-kernel-server with
sudo systemctl restart nfs-kernel-server
and rebooted the machine.
It worked.
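A quick way to see which NFS versions and transports the server is actually offering, and what it is exporting, is the following on the server (assuming rpcinfo is installed):
rpcinfo -p localhost | grep nfs
sudo exportfs -v
If version 3 is missing from the rpcinfo output, the client's nfsvers=3 mount request will be rejected with exactly this error.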
A note which might be useful for others (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional data
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit /etc/exports, because NFS tries to serve the missing machine, fails, and doesn't continue with the next entry; it just dies.
After that you can manually restart as mentioned in other answers.
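For example, you would remove or comment out the stale line in /etc/exports (the path and address below are made up) and then re-export:
# /ndvp2  10.10.11.99(rw,sync,no_root_squash)   <-- stale entry for the deleted VM
sudo exportfs -ra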
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where there is type: 'nfs', I added a comma and nfs_version: 4, nfs_udp: false, as shown below.
A more detailed explanation is in the Vagrant NFS documentation.
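In Vagrantfile terms, each synced folder line then ends up looking something like this (paths are placeholders):
config.vm.synced_folder '.', '/vagrant', type: 'nfs', nfs_version: 4, nfs_udp: false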
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my nfs server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
Which specifically asks for UDP. My server was running but it was not configured to enable connecting over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable udp:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
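After dropping that file in place, restart the server and check which transports nfsd is bound to; on many distributions you can peek at /proc/fs/nfsd/portlist:
sudo systemctl restart nfs-server
cat /proc/fs/nfsd/portlist
A udp 2049 line should now appear alongside tcp 2049.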
Try to ping the IP address of the server from the client (ping <server-ip>); if you get a reply, install the NFS server on the host. Then edit the /etc/exports file; don't forget to add the port along with the IP address.
I found the solution: make an entry in the NFS server's /etc/nfsmount.conf with Defaultvers=3.
There will be a line # Defaultvers=3; just uncomment it and then mount from the NFS client.
The issue will be resolved!
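For reference, the uncommented entry sits under the global options section of /etc/nfsmount.conf and looks roughly like this (section name as shipped on RHEL/CentOS; your file may differ):
[ NFSMount_Global_Options ]
Defaultvers=3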
We are using CentOS 7. If I try the command below with the .pem file included, scp works, but when the .pem file is removed it does not. The code was working earlier without the .pem file. After we moved to a different web server we are getting Host key verification failed issues.
scp -i /home/centos/sshkeys/test.pem root@77.79.77.72:/usr/local//2016/Aug/31/ggea98c0-6f0f-11e6-86d9-2573a2e556aa.wav /var/www/html/tmp/ggea98c0-6f0f-11e6-86d9-2573a2e556aa.wav
Maybe your key was registered in ~/.ssh/config, or it was your default key in ~/.ssh? Check on the old server.
Edited:
For example this is what I put in ~/.ssh/config
Host myserver
Hostname 52.100.100.100
User ubuntu
IdentityFile ~/dev/application/server-key.pem
It allows me to connect simply with ssh myserver. Maybe it was something like this that you had on your server.
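Once such a Host entry exists, scp picks it up too, so the -i flag is no longer needed; for example (alias and paths from above, file name just a placeholder):
scp myserver:/usr/local/2016/Aug/31/file.wav /var/www/html/tmp/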
I am new to salt-ssh and I have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address. What is happening is that when I try to run salt-ssh commands, I have to fight with the initial login process before eventually it just works. I am looking to narrow down what is causing me to have to fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's ssh key to the root user authorized_keys on the vagrant vm. I have verified that I can log into the system using ssh without any issues
sudo ssh root@192.168.33.10
Here are what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the vagrant vm has the ssh key for the user I am executing salt-ssh as, why am I being told that permission is denied? Especially when I verified I could ssh into the system without using salt-ssh.
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to set, since an SSH key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set the priv in my roster to the rsa key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
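With that added, the roster ends up looking like this:
managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa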
How can we verify that SFTP access has been granted on a server, without installing any software/tools?
Most servers have curl and scp installed, which you can use to log into an SFTP server. To test if your credentials work using curl, you could do this:
$ curl -u username sftp://example.org/
Enter host password for user 'username':
Enter your password; if it works you'll get a listing of files (like ls -al), and if it doesn't you'll get an error like this:
curl: (67) Authentication failure
You could also try using scp:
$ scp username@example.org:testing .
Password:
scp: testing: No such file or directory
This verifies that you were able to log in, but that it couldn't find the testing file. If you weren't able to log in you'd get a message like this:
Permission denied, please try again.
Received disconnect from example.org: 2: ...error message...
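If you need a non-interactive check (for a script, say), curl also accepts the password inline, though it will then be visible in your shell history and the process list:
curl -u username:password sftp://example.org/ > /dev/null && echo "SFTP login OK"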
One of the many ways to check for SFTP access using password based authentication:
sftp username@serverName
or
sftp username@serverIP
And then entering the password.
You will get a "Permission denied, please try again." message if it fails; otherwise you will be dropped into the SFTP prompt:
sftp>
You can test that it fully works with commands like ls, mkdir, etc.
Try logging in.
Not being snarky -- that really is probably the simplest way. By "verify[ing] that SFTP access has been granted," what you're really doing is checking whether a particular login/password pair is recognized by the server.
Alternatively, other than doing the "sftp -v" command mentioned above, you can always cat the SSH/SFTP logs stored on any server running sshd and direct them to a file for viewing.
A command set like the following would work, where 1.1.1 would be the /24 of the block you are trying to search.
cd /var/log/
cat secure.4 secure.3 secure.2 secure.1 secure | grep sshd | grep -v 1.1.1 > /tmp/secure.sshd.txt
gzip -9 /tmp/secure.sshd.txt
G'day,
What about telnetting to port 115 (if we're talking the Simple File Transfer Protocol) and seeing what happens when you connect? If you don't get refused, try sending a USER command, then a PASS command, and then a QUIT command.
HTH
cheers,
In SFTP, authentication can be of the following types:
1. Password-based authentication
2. Key-based authentication
If you are going for key-based authentication, you have to prepare the setup accordingly and then proceed with the login procedure. If key-based authentication fails, it automatically asks for a password, i.e. it switches to password-based mode. If you want to verify, you can use this on Linux:
ssh -v user@IP
It will show you all the debug messages; if authentication passes you will be logged in, otherwise you will get "Permission denied". Hope this helps.
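For the key-based route, the usual preparation from the client looks like this (adjust the key type and user@IP to your setup):
ssh-keygen -t rsa
ssh-copy-id user@IP
ssh user@IP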