Web interface for SFTP server - sftp

An SFTP client such as CuteFTP or FileZilla provides a rich user interface for an SFTP server. These are clients installed locally on the user's PC. Instead of a client installed on the user's side, is it possible to set up a web-based user interface on the SFTP server, so that a user with only a browser can access the files on the server? Are there open source or commercial products that can be deployed on the SFTP server to enhance the file transfer experience?
Note: The base server needs to be SFTP, as there will be scripts that clients use to transfer files in a non-interactive manner. For interactive usage, I am looking for a web interface that can serve as an add-on.
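For reference, the non-interactive transfers those scripts do are plain SFTP, along these lines; a minimal sketch in Python with paramiko, where the hostname, username, key path and file names are just placeholders:

import paramiko

# connect with a private key so the script never prompts interactively
key = paramiko.RSAKey.from_private_key_file("/home/batch/.ssh/id_rsa")
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="batchuser", pkey=key)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("report.csv", "/upload/report.csv")   # upload
sftp.get("/download/ack.csv", "ack.csv")       # download
sftp.close()
transport.close()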

I suggest you check out: https://filebrowser.org/features
There is another good one called "droppy", but it's no longer active and currently has too many forks to tell where it will go.
The question you need to answer is what value and scope the application actually needs to cover.
You can always go with Dropbox or something similar too, depending on the user's intended application.

Related

How to access Active Directory from remote machine in c#

I want to know whether it is possible to access my client's Active Directory from my cloud application, which is developed in C#. If yes, please provide the solution.
Assuming required network connectivity is available, yes. What "required network connectivity" entails depends on the actual access mechanism being used. As an example, accessing Active Directory via secure LDAP requires TCP port 636 be open from the source to the domain controller.
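As an illustration of that secure-LDAP access, here is a minimal sketch in Python with the ldap3 library (the actual solution would be in C#, e.g. System.DirectoryServices, but the connection idea is the same; the host, bind account, password and base DN below are placeholders):

import ssl
from ldap3 import ALL, Connection, Server, Tls

# secure LDAP over TCP 636: this port must be reachable from the cloud application
tls = Tls(validate=ssl.CERT_REQUIRED)
server = Server("dc01.corp.example.com", port=636, use_ssl=True, tls=tls, get_info=ALL)
conn = Connection(server, user="CORP\\svc_cloudapp", password="secret", auto_bind=True)
conn.search("dc=corp,dc=example,dc=com", "(sAMAccountName=jdoe)",
            attributes=["displayName", "mail"])
print(conn.entries)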
Since it's not always feasible/practical/"a good idea" to open ports between cloud hosting sources and Active Directory domain controllers, you can look into ADFS (Active Directory Federation Services), which is a federated identity framework you can expose to the Internet and then use from within client applications (and we've got a good number of third party vendors that support authentication and directory data retrieval through ADFS as well).
As to the solution -- there are examples all over the Internet. Search here, search Github, search the Internet in general.

Making use of ssh keys for authentication in other applications?

Let's say I want to set up a poor man's authentication scheme for a simple network service.
I don't want to bother with username/password authentication, for simplicity I just want to have a list of public keys in my application and anyone who can prove they are the owner of that key can use my service.
For the purposes of my application it would greatly simplify the authentication process since all my users are on the local network and they all use Unix. Anytime I onboard a new user I can just ask them for their ssh public key.
Is there a simple way to reuse the mechanism involved in SSH public key authentication in a non-SSH application? This question is intended to be language-agnostic.
If you just have a list of users who can use your application and no need to see who did what, you can set up your server so that it listens only on localhost (127.0.0.1) rather than 0.0.0.0, and provide a restricted sshd that forwards the port required to connect to the application.
~/.ssh/authorized_keys will provide a list of the authorized keys that can be used.
ssh -i private_key_file <hostname> -L 3000:localhost:3000
For a basic setup and help with configuring your sshd, check out this answer:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Note: Be warned that if you don't lock it down, any user will have full shell access on the machine where the service is hosted.
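To make the "listens only on localhost" part concrete, here is a minimal sketch of a service bound to the loopback interface on the same port as the forwarding example above (the echo behaviour is just a stand-in for the real application):

import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # stand-in service logic: echo each line back to the client
        for line in self.rfile:
            self.wfile.write(line)

# bind to 127.0.0.1, not 0.0.0.0, so the service cannot be reached
# from other hosts except through the forwarded SSH port
with socketserver.TCPServer(("127.0.0.1", 3000), EchoHandler) as srv:
    srv.serve_forever()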
A dirty hack off the top of my head: could you wrap the application so that it creates an actual SSH tunnel from localhost to your server, and use that tunnel for the connection?
Assuming you are talking about a web-based application, what you are really looking for is X.509 client certificates (1.3.6.1.5.5.7.3.2). These will allow you to identify each user individually to your application.
They face the same issues that usually arise with key distribution, which is generally considered a hard problem.
If you wanted to head down this road here is what you would need to do.
Generate a root certificate (once; see the sketch after this list)
Set up the web server with appropriate modules to parse the certificate (nginx/apache)
Generate a certificate for each user (openssl)
Download the certificate from a centralized server (maybe use their ssh pub key here)
Install the x509 cert locally (OS Dependent)
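A rough sketch of the two certificate-generation steps (the root certificate and one per-user client certificate), shown here with Python's cryptography package rather than the openssl command line; names, lifetimes and file paths are illustrative only:

import datetime
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.datetime.utcnow()

# root certificate: generated once, self-signed
root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")])
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name).issuer_name(root_name)
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

# per-user client certificate, signed by the root
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice")]))
    .issuer_name(root_name)
    .public_key(user_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]), critical=False)
    .sign(root_key, hashes.SHA256())
)

with open("alice.crt", "wb") as f:
    f.write(user_cert.public_bytes(serialization.Encoding.PEM))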
On the server side, you would need to process the cert as part of the web server (nginx or apache should have modules to do this) and then pass the name on to your application as a header field, which you can then process internally.
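For example, a minimal sketch of that header check, assuming nginx verifies the client certificate and forwards the subject DN in a header (the header name X-Client-DN and the DN values are whatever you configure, e.g. via proxy_set_header with $ssl_client_s_dn):

from flask import Flask, abort, request

app = Flask(__name__)

# subject DNs of the user certificates you have issued (illustrative values)
AUTHORIZED_DNS = {"CN=alice,O=Example", "CN=bob,O=Example"}

@app.route("/")
def index():
    # nginx is assumed to set this header only after verifying the certificate
    dn = request.headers.get("X-Client-DN")
    if dn not in AUTHORIZED_DNS:
        abort(403)
    return "hello " + dn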
This is a much better security solution than usernames and passwords, but it is complex because of the key distribution issue. Most people wouldn't bother, since in most applications it is easy enough to integrate logins with LDAP or RADIUS.

Notify or update data in browser with asterisk

I have Asterisk on one server (104.x.x.x) and a main website on another server (204.x.x.x). I want to update the browser when someone calls a SIP number on my Asterisk. Is there a better way of doing it? What I'm thinking is to expose an API that will update my telephony system database, do AJAX polling or use a WebSocket in the browser on my website, and call that API from the dialplan via AGI, but I'm not sure if that is possible. Vicidial and other telephony system software work this way because their web application is installed on the same server as Asterisk. What these systems do is call an external PHP (or other language) script from their dialplan.
You should use the Asterisk manager API:
http://www.voip-info.org/wiki/view/Asterisk+manager+API
http://www.voip-info.org/wiki/view/Asterisk+Manager+API+Action+Monitor
to monitor calls from the remote server.
Please check the manager.conf file for how to allow access from a remote IP; here's an example:
[usernamehere]
secret=yourpasswordhere
deny=0.0.0.0/0.0.0.0
permit=204.0.0.1/255.255.255.255
read=all,system,call,log,verbose,command,agent,user,originate
;write=command,call,originate
displayconnects=yes
You only need the "write" part if you intend to interact from the remote location back, like hanging up a call...
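A minimal sketch of the remote side in Python: log in to the Asterisk Manager Interface (default TCP port 5038) with the credentials from the manager.conf example above and watch for call events. The host is a placeholder, and the browser push (AJAX polling endpoint or WebSocket) is left as a print:

import socket

AMI_HOST, AMI_PORT = "104.0.0.1", 5038   # placeholder IP, default AMI port

def send_action(sock, action, **fields):
    lines = ["Action: " + action] + [k + ": " + v for k, v in fields.items()]
    sock.sendall(("\r\n".join(lines) + "\r\n\r\n").encode())

with socket.create_connection((AMI_HOST, AMI_PORT)) as s:
    send_action(s, "Login", Username="usernamehere", Secret="yourpasswordhere")
    buf = b""
    while True:
        buf += s.recv(4096)
        # AMI responses and events are blocks of "Key: Value" lines ending in a blank line
        while b"\r\n\r\n" in buf:
            block, _, buf = buf.partition(b"\r\n\r\n")
            text = block.decode(errors="replace")
            if "Event: Newchannel" in text or "Event: Hangup" in text:
                print(text)   # notify the browser here instead of printing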

Sending files from server to client with ASP.NET

I am developing a C# ASP.NET 4.0 application that will reside on a Windows Server 2003 machine. By accessing this application from a networked computer, any user would be able to upload files to the Windows server. In addition, once these files are stored on the server, he/she would be able to copy them from the Windows server to another networked computer.
I have found a way to upload files to a specified location on the server disk, but now I need to send these files from the server disk to the client computers.
My question is: is there any way to send or copy files from the server to other client computers (not the one that is accessing the web service) without needing a program receiving those files on the client computers? FTP, WCF, cmd commands, sockets?
Any idea?
If you want users of your webapp to download files, I'd look into an "ashx generic handler." It will allow you to send files back down to clients over HTTP(s).
If you are looking to have remote users tell your web server to copy files to other servers ON THE SAME LAN AS THE SERVER, you would write that using normal System.IO operations.
Over a LAN, if you have the correct permissions and so on, you can write to a disk on a different machine using File.Copy -- there's nothing special about that.
If we're talking about remote machines over the internet, that's a different story. Something has to be listening whether it's FTP, WCF, DropBox, etc.
If the problem is that it can be painful to get something like WCF to work from a client due to problems like firewall issues under Windows 7, you could take a different route and have the client periodically ping the server looking for new content. To give the server a point of reference, the ping could contain the name or creation date of the most recent file received. The server could reply with a list of new files, and then the client could make several WCF calls, one by one, to pull the content down. This pattern keeps all the client traffic outbound.
You can, if you can run the program under an account that has access to that computer. However, having this sort of access on your network, which would let the outside world put an unfiltered file on your internal network, is just asking to be hacked.
Finally, I decided to install a FileZilla FTP server on each client computer, and my page is working very well. Another option is to create a workgroup on the Windows server and put every client computer into this workgroup, so that the Windows server has access to the computers in the same workgroup.
Here are some links that may help with creating the workgroups:
http://helpdeskgeek.com/networking/cannot-see-other-computers-on-network-in-my-network-places/
http://www.computing.net/answers/windows-2003/server-2003-workgroup-setup-/1004.html

Will direct access to a webdav-mounted file cause problems?

I'm thinking about configuring the remind calendar program so that I can use the same .reminders file from my Ubuntu box at home and from my Windows box at work. What I'm going to try to do is make the directory on my home machine that contains the file externally visible through WebDAV on Apache. (Security doesn't really concern me, because my home firewall only forwards ssh; to hit port 80 on my home box, you need to use ssh tunneling.)
Now my understanding is that webdav was designed to arbitrate simultaneous access attempts. My question is whether this is compatible with direct file access from the host machine. That is, I understand that if I have two or more remote webdav clients trying to edit the same file, the webdav protocol is supposed to provide locking, so that only one client can have access, and hence the file will not be corrupted.
My question is whether these protections will also protect against local edits going through the filesystem, rather than through webdav. Should I mount the webdav directory, on the host machine, and direct all local edits through the webdav mount? Or is this unnecessary?
(In this case, with only me accessing the file, it's exceedingly unlikely that I'd get simultaneous edits, but I like to understand how systems are supposed to work ;)
If you're not accessing the files via the WebDAV protocol, you're not honoring locks set via the LOCK and UNLOCK methods, and therefore leave open the potential to overwrite changes made by another client. This situation is described in the WebDAV RFC here: https://www.rfc-editor.org/rfc/rfc4918#section-7.2
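For reference, this is roughly what taking a WebDAV lock looks like at the protocol level; a minimal sketch in Python using the requests library, with the URL and credentials as placeholders. Edits made directly through the local filesystem never see this lock, which is exactly the risk described above:

import requests

LOCK_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:lockinfo xmlns:D="DAV:">
  <D:lockscope><D:exclusive/></D:lockscope>
  <D:locktype><D:write/></D:locktype>
  <D:owner><D:href>mailto:me@example.com</D:href></D:owner>
</D:lockinfo>"""

# LOCK is a WebDAV extension method (RFC 4918, section 9.10); requests passes it through
resp = requests.request(
    "LOCK", "http://localhost/dav/.reminders",
    data=LOCK_BODY,
    headers={"Content-Type": "application/xml", "Timeout": "Second-600"},
    auth=("user", "password"),
)
# the server returns a Lock-Token header only if it granted the lock
print(resp.status_code, resp.headers.get("Lock-Token"))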
