While I am trying to mount my GoFlex Home share onto Ubuntu using cifs-utils, the error below pops up.
sb@sb-Virtual-Machine:~$ sudo mount -t cifs "//192.168.1.14/bezgoan" -o user=bezgoan,vers=1.0 /mnt
Password for bezgoan@//192.168.1.14/bezgoan: ************
Retrying with upper case share name
mount error(6): No such device or address
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I am able to open the share directly via Ubuntu's file browser, but the command line errors out.
I tried smbclient as well. smbclient doesn't return the smb> prompt, but it does show the list of available shares:
sb@sb-Virtual-Machine:~$ sudo smbclient -L //192.168.1.14/bezgoan -U bezgoan
WARNING: The "syslog" option is deprecated
Enter WORKGROUP\bezgoan's password:
Sharename             Type     Comment
---------             ----     -------
GoFlex Home Personal  Disk     GoFlex Home (GoFlex Home Personal)
GoFlex Home Backup    Disk     GoFlex Home (GoFlex Home Backup)
GoFlex Home Public    Disk     GoFlex Home (GoFlex Home Public)
External Storage      Disk     GoFlex Home (External Storage)
IPC$                  IPC      IPC Service (GoFlex Home)
GoFlex_Home           Printer  GoFlex Home usb port
Reconnecting with SMB1 for workgroup listing.
Server                Comment
---------             -------
Workgroup             Master
---------             -------
HOME                  BTHUB5
SEAGATEGROUP          GOFLEX_HOME
I was able to connect by specifying the smbclient protocol version as 1.0 in this case.
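For reference, smbclient can be forced to the SMB1 dialect via its max-protocol option (a sketch, reusing the share and user from above; NT1 is the SMB1 dialect name):
sudo smbclient //192.168.1.14/bezgoan -U bezgoan -m NT1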
An OpenStack VM's file system went into read-only mode and I rebooted it. After the reboot I get the GRUB menu, the VM auto-boots from the kernel, and I see only a blank screen in the OpenStack dashboard console.
I tried nova rescue but it failed with the error below:
cannot be rescued: Cannot rescue a volume-backed instance (HTTP 400)
I edited the GRUB entry and entered single/rescue mode to fix the file system error, but I still get a blank screen after Ctrl+X in the GRUB editor.
I want to bring up the VM instance; how do I fix the file system error?
The file system error happened because the VM's backend storage (Ceph SDS) nodes all went down due to a power failure and were later restored.
I'm using RHOSP 13 and the VM's image is RHEL 7.
I used an alternate way to get into rescue mode and fixed the file system error.
SSH to the instance's compute host and attach an ISO image to the instance, since its backend is KVM (using virsh edit, point the cdrom device to the ISO image location and set the boot device to cdrom, as sketched below):
virsh edit vmname
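Inside the domain XML, the relevant pieces look roughly like this (a sketch; the ISO path and target device are placeholders):
<os>
  <boot dev='cdrom'/>
</os>
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/rescue.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>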
Set the instance to the active state with the OpenStack CLI:
nova reset-state --active instancename
Start the VM from the compute host:
virsh start vmname
Get into the OpenStack dashboard console, repair the file system accordingly, and shut down the instance.
Troubleshooting
In rescue mode, continue to a shell and run:
vgscan -v
vgchange -a y
lvscan
e2fsck /dev/xxx or xfs_repair /dev/xxx
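For example, on a default RHEL 7 layout the root logical volume is often named like this (a hypothetical device path; check the lvscan output for the real one):
xfs_repair /dev/mapper/rhel-root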
From the OpenStack CLI, start the instance normally after the file system fix:
nova start instancename
Atom is able to open a project and to show the whole tree of the project on the left side, a really nice feature.
Now I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) on VirtualBox. Is there a way for Atom, located on the host OS, to open a project located on RHEL?
Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they are stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
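For example, to mount a project directory from the RHEL guest (hypothetical user, address, and paths):
$ mkdir -p ~/mnt/project
$ sshfs devuser@192.168.56.101:/home/devuser/project ~/mnt/project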
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to setup SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
In addition to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
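For example (the mount point name is arbitrary, and the host and path are the same hypothetical ones as above):
$ sudo mkdir /media/sshfs
$ sudo chown $USER /media/sshfs
$ sshfs devuser@192.168.56.101:/home/devuser/project /media/sshfs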
I would recommend the remote-sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file and also lets me define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote user to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 abr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 abr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 abr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 abr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details, see man sshfs.
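A sketch of an explicit mapping via a mapping file (the file path and the UID are made up; see man sshfs for the exact semantics of the idmap=file, uidfile, and nomap options):
$ cat /home/locuser/.sshfs/uidmap
locuser:1001
$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=file,uidfile=/home/locuser/.sshfs/uidmap,nomap=ignore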
I am writing this answer because none of the other answers worked for me.
Mounting as a directory and browsing with Atom (@Remco Haszing's answer) was a brilliant one,
but in my case Atom wants to index the whole remote project, which is a heavy one, and it becomes unresponsive.
Using the remote-sync package is good when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant for this job (editing files remotely over SSH).
The problem with it is that it has been abandoned.
These helped me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor
I am new to salt-ssh and have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address: when I run salt-ssh commands, I have to fight with the initial login process before eventually it just works. I am trying to narrow down what is causing this fight with the login process.
I am using OS X to run my salt-ssh commands against an ubuntu vagrant vm.
I have added my root user's SSH key to the root user's authorized_keys on the Vagrant VM, and I have verified that I can log into the system using SSH without any issues:
sudo ssh root@192.168.33.10
Here are what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and I get this output
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is: if the Vagrant VM has the SSH key for the user I am executing salt-ssh as, why am I being told that permission is denied, especially when I verified I could SSH into the system without using salt-ssh?
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to, since an SSH key should have worked.
I'm hoping someone can tell me the best way to setup connections between both systems so that I don't have to have this fight every time.
I needed to set priv in my roster to the RSA key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
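So the full roster entry looks like this:
managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa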
I have an AWS EC2 instance running an Ubuntu 12.04 web server that I host WordPress on. For WordPress to update, it asks me to supply FTP credentials. I set up FTP according to this post: http://stephen-white.blogspot.co.uk/2012/05/how-to-set-up-wordpress-on-amazon-ec2_31.html
But the FTP user I created (ftpuser) can't log in. WordPress only gives very vague errors, so I tried FTP from the OS X terminal, which gives 'Login incorrect', even though the password is definitely correct. I can FTP in using my normal username and password.
This is the content of my vsftpd.conf file (I've removed all commented-out lines):
listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
secure_chroot_dir=/var/run/vsftpd/empty
pasv_enable=YES
pasv_min_port=14000
pasv_max_port=14050
port_enable=YES
pasv_address=54.241.13.224
pasv_addr_resolve=NO
This is an nmap scan of the server's ports:
PORT STATE SERVICE
20/tcp closed ftp-data
21/tcp open ftp
22/tcp open ssh
80/tcp open http
443/tcp closed https
14000/tcp closed unknown
The /var/www folder (where I have WordPress installed) is owned by ftpuser, and this is the entry for ftpuser in the /etc/passwd file:
ftpuser:x:1001:1001::/var/www:/sbin/nologin
I'm only an amateur server admin, so I don't have a full grasp of what I'm doing. Does anyone have any ideas why this is happening and what needs to be done?
If you are receiving the following "Login incorrect" error message on AWS EC2:
331 Please specify the password.
Password:
530 Login incorrect.
ftp: Login failed
There is a problem with logging in using the shell. To overcome this, there is one further step missing after the following instruction in that blog post:
"Add an FTP user, giving access only to the WordPress files and, for additional security, ensuring the user can not open a shell:"
useradd ftpuser -d /var/www/html -s /sbin/nologin
Then add /usr/sbin/nologin as the last line of the /etc/shells file (use the same nologin path as the user's shell in /etc/passwd, e.g. /sbin/nologin above, if it differs):
$ vi /etc/shells
/usr/sbin/nologin
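Equivalently, from the shell (tee -a appends rather than overwrites):
$ echo '/usr/sbin/nologin' | sudo tee -a /etc/shells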
Try logging in again using your FTP client. That's how I got it working on my instances.
I used this command from the command prompt on a Windows box (my Linux machine is at work):
ftp -u ftp://cran.R-project.org/incoming/ qdap_0.1.0.tar.gz
I used the info from:
https://github.com/hadley/devtools/wiki/Release
http://cran.r-project.org/doc/manuals/R-exts.html#Submitting-a-package-to-CRAN
I expected to see it show up here: ftp://cran.r-project.org/incoming/ but I do not see it.
Am I just being impatient or did my package not upload? Here is the command line output:
C:\Users\trinker\GitHub>ftp -u ftp://cran.R-project.org/incoming/ qdap_0.1.0.tar.gz
Transfers files to and from a computer running an FTP server service
(sometimes called a daemon). Ftp can be used interactively.
FTP [-v] [-d] [-i] [-n] [-g] [-s:filename] [-a] [-A] [-x:sendbuffer] [-r:recvbuffer] [-b:asyncbuffers] [-w:windowsize] [host]
-v Suppresses display of remote server responses.
-n Suppresses auto-login upon initial connection.
-i Turns off interactive prompting during multiple file
transfers.
-d Enables debugging.
-g Disables filename globbing (see GLOB command).
-s:filename Specifies a text file containing FTP commands; the
commands will automatically run after FTP starts.
-a Use any local interface when binding data connection.
-A login as anonymous.
-x:send sockbuf Overrides the default SO_SNDBUF size of 8192.
-r:recv sockbuf Overrides the default SO_RCVBUF size of 8192.
-b:async count Overrides the default async count of 3
-w:windowsize Overrides the default transfer buffer size of 65535.
host Specifies the host name or IP address of the remote
host to connect to.
Notes:
- mget and mput commands take y/n/q for yes/no/quit.
- Use Control-C to abort commands.
(This was previously a comment and is being transferred to an answer here.)
First, the fact that ftp printed its usage text means the Windows ftp client rejected the command (it does not accept URLs as arguments, and its help lists no -u flag), so nothing was uploaded. Also make sure you are not looking at a page cached earlier by your browser.
To perform the actual upload, you might want to try the free cross-platform FileZilla FTP software. You can upload and concurrently view the contents of the source directory on your machine (in the left pane) and the target directory on CRAN (in the right pane), view a log of what is happening in the top pane, and see a progress indicator in the bottom pane. It also has a site manager to store the sites you upload to, so you don't need to keep typing in their URLs each time you do an upload.
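If you prefer to stay on the command line, curl can also upload a file over FTP (a sketch; this assumes CRAN's incoming directory accepts anonymous uploads):
curl -T qdap_0.1.0.tar.gz ftp://cran.R-project.org/incoming/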