Is the editor Atom able to open projects on a remote server? - atom-editor

Atom is able to open a project, and to show the whole tree of the project on the left side, a really nice feature.
Now I'm using SSH from the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) running on VirtualBox. Is there a way for Atom, installed on the host OS, to open a project located on the RHEL guest?

Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they are stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
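For example, to mount the RHEL guest's project directory from the question (the address and paths here are placeholders, so adjust them to your VM):
$ mkdir -p ~/rhel-project
$ sshfs devuser@192.168.56.101:/home/devuser/myproject ~/rhel-project
Open ~/rhel-project in Atom as a normal project. When you are done, unmount with:
$ fusermount -u ~/rhel-project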
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
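As a rough sketch of what that setup looks like (the user, host and paths are placeholders; the linked post remains the authoritative reference), you add a map to /etc/auto.master:
/mnt/ssh /etc/auto.sshfs --timeout=30 --ghost
and describe the mount in /etc/auto.sshfs:
project -fstype=fuse,rw,allow_other :sshfs\#devuser@devhost\:/home/devuser/myproject
After restarting autofs, the share is mounted on demand under /mnt/ssh/project.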
In addition to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
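If you only want this for the VM, it can be scoped to a single host instead of applied globally (the host name is a placeholder):
Host devhost
    Ciphers arcfour
    Compression no
Keep in mind that arcfour has been dropped from newer OpenSSH releases, so this trick only applies to older clients and servers.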
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
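For example (the mount point name is just an illustration):
$ sudo mkdir /media/sshfs
$ sudo chown $USER: /media/sshfs
$ sshfs devuser@devhost:/home/devuser/myproject /media/sshfs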

I would recommend the remote-sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
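For reference, remote-sync is configured through a .remote-sync.json file in the project root. The field names below are from memory and may differ between versions, so treat this as a sketch and check the package README:
{
  "transport": "scp",
  "hostname": "devhost",
  "port": "22",
  "username": "devuser",
  "keyfile": "/home/me/.ssh/id_rsa",
  "target": "/home/devuser/myproject",
  "uploadOnSave": true,
  "watch": []
}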

Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.

A complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote user to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 abr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 abr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 abr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 abr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details, see man sshfs.
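A sketch of the explicit form using sshfs's idmap=file mode (the mapping file format below is an assumption; check man sshfs for the exact syntax on your version):
$ cat ~/.sshfs/uidmap
devuser:1000
$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=file,uidfile=$HOME/.sshfs/uidmap,nomap=ignore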

I am writing this answer because none of the other answers worked for me.
Mounting the project as a directory and browsing it with Atom (@Remco Haszing's answer) is a brilliant approach, but in my case Atom tried to index the whole remote project, which is a heavy one, and became unresponsive.
The remote-sync package is good when you work locally and then want to upload the files to the server.
The remote-edit package is actually the one meant for this job (editing files remotely over SSH); the problem is that it has been abandoned.
These worked for me as its replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor
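Either one can be installed from the command line with Atom's package manager, assuming apm is on your PATH:
$ apm install remote-edit-ni
$ apm install remote-editor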

Related

Permission Issue with Docker Volume Driver for Azure File Storage

I am following the readme for this project (https://github.com/Azure/azurefile-dockervolumedriver/blob/master/contrib/init/upstart/README.md), but when I try and mount a volume on a container like this
docker volume create -d azurefile -o share=myshare --name=myvol
docker run -i -t -v myvol:/data busybox
(inside the container)
# cd /data
# touch file.txt
I get this error:
Error response from daemon: VolumeDriver.Mount: mount failed: exit status 32
output="mount.cifs kernel mount options: ip=168.61.57.82,unc=\\\\cmstoragecd.file.core.windows.net\\myshare,vers=3.0,dir_mode=0777,file_mode=0777,user=cmstoragecd,pass=********\nmount
error(13): Permission denied\nRefer to the mount.cifs(8) manual page (e.g. man mount.cifs)\n"
This is running on an Ubuntu 14.04 server on Azure. I have successfully used the extension with similar servers, but it is now not working. What can I do to debug this?
Your answer is correct. CIFS in many Linux distros currently does not have encryption support, which Azure File Storage requires for cross-region SMB traffic.
Quoting the note at https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/
Note: The Linux SMB client doesn’t yet support encryption, so mounting a file share from Linux still requires that the client be in the same Azure region as the file share. However, encryption support for Linux is on the roadmap of Linux developers responsible for SMB functionality. Linux distributions that support encryption in the future will be able to mount an Azure File share from anywhere as well.
In the future, please consider contacting us directly by opening a new issue on our GitHub repository at https://github.com/Azure/azurefile-dockervolumedriver/issues.
I managed to get around this error by using a storage account in the same region as the Azure VM. Originally I had a VM running in West Europe, using a file share in East US.
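For example, with the Azure CLI you could create a storage account co-located with the VM (the names and region below are placeholders) and then point the volume driver's credentials at it as described in its README:
$ az storage account create --name mystorageacct --resource-group myresourcegroup --location westeurope --sku Standard_LRS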

SELinux Policy to Allow NGINX Access to Parallels Shared Folders on Mac

I'm trying to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop.
Host system: Mac OSX 10.10
Parallels Desktop: 10
Running Virtual OS: CentOS 7 (minimal / command line)
I have the Parallels Tools installed, and in CentOS I see the shared folder: /media/psf/Shared-Folder
When I set the Nginx server root to that folder I get a 403 Forbidden. I know it is a configuration parameter that needs editing because if I change SELinux to Permissive, the files are served correctly in NGINX.
When checking how the files are mounted I see this:
root root system_u:object_r:removable_t:s0 /media/psf/Shared-Folder/
I can see the 'removable_t' context - however - my issue is that I cannot seem to find a way to allow the httpd service to serve files that are mounted as removable storage.
I have tried:
chcon -R -t public_content_t /media/psf/Shared_Folder/
chcon -R -t httpd_sys_content_t /media/psf/Development-Projects/
and in all cases I get a "chcon: failed to change context of: '...': Operation not supported" error.
Checking /usr/sbin/getsebool -a | grep http I do not see any option to allow httpd to access removable storage mounts.
Last item: I do not believe I can change the way Parallels mounts the shared folders.
Question: Is there a way to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop?
What you need to do is use semanage. To get it, you have to install policycoreutils-python.
The same type of question has already been asked here. Cheers!
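A minimal sketch of that route on CentOS 7 (the target type and path are taken from the question's own chcon attempts; whether the Parallels-mounted share accepts relabeling is an assumption you'll need to verify):
$ sudo yum install policycoreutils-python
$ sudo semanage fcontext -a -t httpd_sys_content_t "/media/psf/Shared-Folder(/.*)?"
$ sudo restorecon -Rv /media/psf/Shared-Folder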

MAMP not connecting to localhost, any solutions?

I have downloaded the newest MAMP version (3.0.5) and I am unable to connect to the localhost. All I get in Google is "Oops! Google Chrome could not find localhost:8888". I have tried all these things...
Re-downloaded MAMP several times and restarted comp
Changed the ports to 80 & 3306
Turned Firewall on and off and added "MAMP" as an incoming connection
Turned Web Sharing on and off
Checked and unchecked options in MAMP Preferences and hit ok/restart
Are there any solutions out there that have worked for you? I know it all varies, but anything to get this going would be a miracle at this point.
Thank you!
I was also experiencing the same issue on my OS X 10.9.4 with MAMP Pro 3.0.5 installed. The fix was very easy: missing info in my hosts file.
Solution found on: http://forum.mamp.info/viewtopic.php?f=2&t=82253
Note - I am using Mountain Lion
1) Check your hosts file
Go to the folder /etc/
The file path is: Your-hard-drive:private:etc:hosts
This is an invisible folder, so you will need something that allows you to view invisible files and folders on your Mac. I have Path Finder installed, which allows me to do this.
There should be an entry that makes localhost work.
For me there was nothing in the hosts file. It was blank.
2) Drag it to the desktop; that'll copy it.
3) Paste the following into it and save it:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
4) Then delete the one in the etc folder (it'll ask for your password)
5) Then move the copy into the etc folder (password again)
6) Then open terminal and paste this in:
sudo killall -HUP mDNSResponder
(It will require your password)
7) I needed to restart my computer and then MAMP started working again!
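If you prefer to skip the drag-and-drop steps, the same edit can be done entirely from Terminal (assuming you are comfortable with sudo and a command line editor):
$ sudo nano /etc/hosts
(paste the entries above, save and exit)
$ sudo killall -HUP mDNSResponder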
MampGuy

mount: nfs access denied by server

I am trying to mount an NFS share on my Linux machine.
My /etc/fstab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS on my NAS device.
When I run mount manually ("mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/") I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the nfs server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so worth doing before looking for other problems. Note that, if you do have to change any entries then the nfs-server has to be stopped and re-started, as it reads the hosts file only when it is started.
Is there a config file on the NAS where to put allowances for clients? E.g. in debian based OS the config file is "/etc/exports" and you would put there "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" and activate this with "exportfs -a" (your NAS may do this automatically if you update the config via a web interface, I guess.) Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
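In that case the server-side configuration would look roughly like this (the path and client address are taken from the question; adjust them to your NAS):
/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)
in /etc/exports, followed by:
$ exportfs -a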
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might also need to restart the NFS config and NFS services on the NFS server, and run the export again:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition from the hard drive on the VM, so I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I still could not see the files on it.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the file system.
I saw this error presumably due to an older NFS client; adding -o nfsvers=3 fixed the issue for me, e.g.
mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/

How does scp traffic flow between two remote hosts?

If you issue an scp command between two remote servers, will the traffic flow directly between the hosts, or will it flow from Remote1 => Local Machine => Remote2?
For example I issue this command on my laptop:
scp user1@remote1.com:/Files user2@remote2.com:/Files
Newer versions of scp (since 2011) have the option -3 which will route the traffic through your local machine. This is useful if the hosts are on different networks and can't see each other. Found this on SuperUser. From your linked article it seems like normally the hosts will try to connect directly to each other.
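For the command in the question that would simply be:
scp -3 user1@remote1.com:/Files user2@remote2.com:/Files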
Looks like it can be done.
If your Linux/BSD/Unix or Mac does not have the -3 option, just compile the latest version from http://www.openssh.org/portable.html
It is as simple as:
./configure; make ; sudo make install
It will be installed in /usr/local/bin by default. I just did that on my Mac OS X Lion.
