Programmable, secure FTP replacement - ASP.NET

We need to move off traditional FTP for security purposes (it transmits its passwords unencrypted). I hear SSH touted as the obvious alternative. However, I have been driving FTP from an ASP.NET program interface to automate my website development, which is now quite a highly web-enabled process.
Can anyone recommend a secure way to transfer files around which has a program interface that I can drive from ASP.NET?

SharpSSH implements sending files via SCP.
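For illustration, an SCP upload with SharpSSH might look like the following sketch. The host, credentials, and paths are hypothetical, and the method names are quoted from memory of the library, so verify them against the version you download:

using Tamir.SharpSsh;

class ScpUploadSketch
{
    static void Main()
    {
        // hypothetical host, credentials, and paths
        Scp scp = new Scp("example.com", "username", "password");
        scp.Connect();
        scp.Put(@"c:\data\test.zip", "/wwwroot/test.zip"); // copy a local file to the remote path
        scp.Close();
    }
}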

The question has three subquestions:
1) Choosing the secure transfer protocol
A secure version of the old FTP exists - it's called FTP/SSL (plain old FTP over an SSL-encrypted channel). Maybe you can still use your old deployment infrastructure - just check whether it supports FTPS or FTP/SSL.
You can check details about FTP, FTP/SSL and SFTP differences at http://www.rebex.net/secure-ftp.net/ page.
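If you would rather try FTP/SSL without a third-party component first, .NET's built-in FtpWebRequest can perform an explicit FTP over TLS upload. A minimal sketch follows; the server name, credentials, and paths are hypothetical:

using System;
using System.IO;
using System.Net;

class FtpsUploadSketch
{
    static void Main()
    {
        // hypothetical server, credentials, and paths
        var request = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/wwwroot/index.html");
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.EnableSsl = true; // request explicit FTP/SSL (AUTH TLS) on the control channel
        request.Credentials = new NetworkCredential("username", "password");

        byte[] content = File.ReadAllBytes(@"c:\data\index.html");
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(content, 0, content.Length);
        }
        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusDescription);
        }
    }
}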
2) SFTP or FTP/SSL server for Windows
When you choose whether to use SFTP or FTPS, you have to deploy the proper server. For FTP/SSL we use Gene6 (http://www.g6ftpserver.com/) on several servers without problems. There are plenty of FTP/SSL servers for Windows, so use whatever you want. The situation is a bit more complicated with SFTP servers for Windows - there are only a few working implementations. Bitvise WinSSHD looks quite promising (http://www.bitvise.com/winsshd).
3) Internet File Transfer Component for ASP.NET
The last part of the solution is secure file transfer from ASP.NET. There are several components on the market. I would recommend the Rebex File Transfer Pack - it supports both FTP (and FTP/SSL) and SFTP (SSH File Transfer Protocol).
The following code shows how to upload a file to the server via SFTP. The code is taken from our Rebex SFTP tutorial page.
// requires a reference to the Rebex components and 'using Rebex.Net;'
// create client, connect and log in
Sftp client = new Sftp();
client.Connect(hostname);
client.Login(username, password);

// upload the 'test.zip' file to the current directory at the server
client.PutFile(@"c:\data\test.zip", "test.zip");

// upload the 'index.html' file to the specified directory at the server
client.PutFile(@"c:\data\index.html", "/wwwroot/index.html");

// download the 'test.zip' file from the current directory at the server
client.GetFile("test.zip", @"c:\data\test.zip");

// download the 'index.html' file from the specified directory at the server
client.GetFile("/wwwroot/index.html", @"c:\data\index.html");

// upload a text using a MemoryStream
string message = "Hello from Rebex SFTP for .NET!";
byte[] data = System.Text.Encoding.Default.GetBytes(message);
System.IO.MemoryStream ms = new System.IO.MemoryStream(data);
client.PutFile(ms, "message.txt");

// close the connection
client.Disconnect();
Martin

We have used a variation of this solution in the past which uses the SSH Factory for .NET.

The traditional secure replacement for FTP is SFTP, but if you have enough control over both endpoints, you might consider rsync instead: it is highly configurable, secure just by telling it to use ssh, and far more efficient for keeping two locations in sync.
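If you still need to drive the transfer from .NET code, one hedged option is simply to shell out to rsync. The sketch below assumes the rsync and ssh binaries are installed on the machine running it; the user, host, and paths are hypothetical:

using System.Diagnostics;

class RsyncDeploySketch
{
    static void Main()
    {
        // -a archive mode, -z compress in transit, -e ssh runs rsync over SSH.
        // User, host, and paths are hypothetical.
        var psi = new ProcessStartInfo
        {
            FileName = "rsync",
            Arguments = "-az -e ssh /var/staging/site/ deploy@example.com:/var/www/site/",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var process = Process.Start(psi))
        {
            string log = process.StandardOutput.ReadToEnd(); // rsync's file-by-file report
            process.WaitForExit();
        }
    }
}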

G'day,
You might like to look at ProFTPD.
Heavily customisable. Based on Apache module structure.
From their web site:
ProFTPD grew out of the desire to have a secure and configurable FTP server, and out of a significant admiration of the Apache web server.
We use our adapted version for large scale transfer of web content. Typically 300,000 updates per day.
HTH
cheers,
Rob

Related

Polling SFTP Server for new files

We have a requirement where one application (AppA) will push files (sometimes 15K - 20K files in one go) to an SFTP folder. Another application (AppB) will be polling this folder location to pick up the files, and upon reading each one it will push an Ack file to a different folder on the same SFTP server. There have been issues with lost files and mismatches between files sent and Acks received. We now need to develop a mechanism (as an Auditor) to monitor the SFTP location. This SFTP server runs on Windows, and a single file will be less than 1 MB.
We are planning to adopt one of the following approaches:
1) Write an external utility in Java which keeps polling the SFTP location, downloads each file locally, reads its content, and stores it in a local DB for reconciliation (a sketch of this approach appears after the two options).
Pro:
The utility will be standalone, with no dependency on the SFTP server as such (apart from reading the files)
Con:
In addition to AppB, this utility will also be making connections to the SFTP server and downloading files; this may overload the SFTP server and hamper the regular functioning of AppA and AppB
2) Write a Java utility/script and deploy it on the SFTP server itself, either as a scheduled job or configured to listen to the respective folder. Upon reading a file locally on the SFTP server, this utility will call an external API to post the content of the file and store it in the DB for reconciliation.
Pro:
There will be no connection and file-download overhead on the SFTP server
File reading will be faster and almost real-time (in case a listener is used)
Con:
Java needs to be installed on SFTP server
This utility will call the external API, and in the case of 15K - 20K files this will slow down the process of capturing the data and storing it in the DB
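For the design discussion, here is a minimal sketch of the option 1 polling utility. It uses C# with the SSH.NET library (to match the code elsewhere on this page) rather than Java, and the host, credentials, folder names, and DB step are all hypothetical placeholders:

using System;
using System.IO;
using Renci.SshNet;

class SftpAuditPoller
{
    static void Main()
    {
        // hypothetical host, credentials, and folders
        using (var client = new SftpClient("sftp.example.com", "auditor", "password"))
        {
            client.Connect();
            foreach (var entry in client.ListDirectory("/appa/outbound"))
            {
                if (entry.IsDirectory) continue; // skip '.', '..' and subfolders

                string localPath = Path.Combine(@"C:\audit\inbox", entry.Name);
                using (var local = File.Create(localPath))
                {
                    client.DownloadFile(entry.FullName, local);
                }
                // read the file content here and record name/size/timestamp
                // in the local reconciliation DB
            }
            client.Disconnect();
        }
    }
}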
We are currently in the design process and need your suggestions, plus any insight if anyone has implemented a similar kind of mechanism.

FTPS for transferring file from unix to mainframe

I am looking for a JCL script/procedure on the mainframe which can facilitate file transfer from a Unix server to the mainframe. I am required to use FTPS for the outbound jobs (pull the file from the UNIX server to the mainframe host).
Rather than JCL, just do it in a shell script. Here is a good site on using such commands:
https://blog.eduonix.com/shell-scripting/how-to-automate-ftp-transfers-in-linux-shell-scripting/
Once you have that working as a shell script in USS, you should be able to call the shell script from JCL so you can execute it in a scheduled batch job if you need to.
Kenny's suggestion is fairly reasonable. IBM's documentation on how to write JCL for FTP(S)-related tasks is available in their "z/OS Communications Server: IP User's Guide and Commands" publication, IBM Publication No. SC27-3662. The current revision appears to be SC27-3662-30, but later revisions are possible. You can easily find this publication online, and make sure you don't skip the section beginning with the title "Submitting FTP requests in batch." Make sure you set the security options correctly (of course).
Please note that you're asking about FTPS, i.e. TLS encryption applied to either or both (preferably both) of the FTP channels (control and data). SFTP is another file transfer protocol based on SSH that z/OS also supports.
Another possible approach that you'll fairly often find available on z/OS installations is to use IBM MQ Advanced for z/OS's Managed File Transfer (MFT) feature to retrieve the file(s) using FTPS. As the name suggests, this'll be managed and have at least some error handling capabilities.
Yet another possible approach if you prefer HTTPS protocol is to use the z/OS Client Web Enablement Toolkit's HTTPS protocol enabler to fetch the file. That's a built-in, standard feature in all currently supported z/OS releases, and you can use it from a relatively simple REXX script for example. Details are available here (z/OS 2.3 variant of the documentation):
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.ieac100/ieac1-cwe-http.htm

What is the difference between FTP and HTTP?

HTTP is used to display information, and it can also be used to transfer files from one host to another.
FTP is used to transfer files from one host to another.
So I have come to the point that FTP and HTTP are almost doing the same work. Then what is the exact benefit of using FTP when I can do this with HTTP?
Correct me if I am wrong.
Thanks
FTP is a File Transfer Protocol, for transferring files.
FTP is significantly older; it is a protocol designed to enable the transfer of files over a long-running session. There is a wide array of commands, and the intent is to allow you to navigate and browse a remote file system and retrieve files (originally over a separate data connection).
FTP still sees a lot of use, but many files are actually transferred over HTTP instead.
HTTP
The HyperText Transfer Protocol was originally designed to transfer hypertext documents and the various assets needed to render them. In practice, this is the way information is transferred on the web -- HTML, CSS, images, and data are all transferred between web servers and web browsers, as well as between one server and another, this way.
HTTP was designed to retrieve a resource from a URL that may or may not match the remote file system (in many web apps, the structure of the URLs has very little to do with the file locations). There is often only a single request in a single HTTP connection, and the data uses the same connection as the request.
So I have come to the point that FTP and HTTP are almost doing the same work.
Not really. FTP can be used for file transfer and not much more. HTTP is far more flexible, since it not only transfers byte streams but also metadata (describing what kind of data is being sent), supports implicit compression, supports client-specific responses (for example, based on supported languages), has more flexible ways of authentication, and is tuned for less overhead (i.e. it can be faster)...
Then what is the exact benefit of using FTP when I can do this with HTTP?
There is no real benefit to FTP today. On the contrary: compared to alternatives like HTTP, the design of FTP leads to lots of problems in today's infrastructure, where NAT is heavily used (i.e. multiple internal systems behind a single router with a public IP address).
FTP remains mostly in places where clients or servers don't support more modern ways of file exchange. A typical example is cheap web hosting, where access to the server to update files is often done via FTP, since lots of tools have FTP built in and it is easy to set up on the server too. Alternatives like WebDAV (HTTP-based) or SFTP (SSH-based) are less used here since they have less support in clients and servers, even though they would offer more security, more flexibility, and fewer problems.
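To make the flexibility point concrete, here is a minimal C# sketch of an HTTP file download (the URL is hypothetical). Note how the metadata arrives with the payload, and how the implicit compression mentioned above is a one-line handler option:

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class HttpDownloadSketch
{
    static async Task Main()
    {
        // enable transparent decompression of gzip/deflate responses
        var handler = new HttpClientHandler
        {
            AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
        };
        using (var http = new HttpClient(handler))
        {
            var response = await http.GetAsync("https://example.com/files/report.pdf"); // hypothetical URL
            response.EnsureSuccessStatusCode();

            // metadata travels with the bytes: media type, length, and so on
            Console.WriteLine(response.Content.Headers.ContentType);
            Console.WriteLine(response.Content.Headers.ContentLength);

            using (var file = File.Create("report.pdf"))
            {
                await response.Content.CopyToAsync(file);
            }
        }
    }
}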

Connect to z/OS Mainframe with SFTP

We have an IBM System z host sitting in our cellar. Now the issue is that I have no clue about mainframes! (It's not USS, btw.)
The problem: how can I transfer a file from the host system to a Windows machine?
Usually on UNIX systems I would just install an SSH daemon and connect to it via a program called WinSCP, then transfer the file in binary so that nothing gets converted (UltraEdit and other editors can handle this).
With the host system it seems to be a bit difficult, as IBM's native format is EBCDIC and I have no idea if there is a state-of-the-art SFTP server program for the host. Could anybody be so kind as to enlighten me? From my current experience with IT there must be a state-of-the-art SFTP connection to that system. I appreciate any help/hints/solutions.
Thank you,
O.S
If the mainframe "sitting in [your] cellar" is running z/OS then it has Unix System Services installed. You can't have z/OS without it.
There is an SFTP package available (for free) for z/OS.
You can test for Unix System Services by firing up a 3270 emulator, going to ISPF option 3.17, putting a forward slash (/) in the Pathname field, and pressing the mainframe Enter key. Another way would be to key OMVS at a TSO READY prompt, which will start up a 3270-based Unix shell.
It is possible that USS is simply not available to you: not because it isn't installed (if you're running any supported release of z/OS, USS is present), but because there could be concerns about supporting something outside a particular group.
Or, depending on what OS you have running on your System z, it's possible you don't have z/OS. You could have z/VM, you could have zLinux, you could have TPF. However, if you're running zLinux, you have linux, which has sftp installed, and which uses ASCII, not EBCDIC.
As cschneid says, however, if you have z/OS, you have USS. TCP/IP, among other things, won't run without it. Also note that z/OS TCP/IP has an FTP server, so you can connect that way if the FTP server is set up. If security is an issue, FTPS is supported, although it's painful to set up. With the native FTP server, you can convert from EBCDIC to ASCII when you're doing the transfer. There's also an NFS server available. And SMB as well, I believe.
And there's an FTP client available as well, so you could FTP from z/OS to your system, if you wanted to.
Maybe a better thing to do would be to explain what you're trying to do with the data, and what the data is, in general. You can edit files directly on the mainframe using the TSO, ISPF, or OMVS editors. There are a lot of data types that the mainframe supports that you're not going to be able to handle on a non-z system unless you go through an export process. I'm not really clear on whether you want to convert the file to ASCII when you transfer it or not.
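To illustrate that last point, here is a hedged C# sketch (host, credentials, and dataset name are hypothetical) that pulls a dataset from the z/OS FTP server using .NET's built-in FtpWebRequest. The UseBinary flag selects between an ASCII-mode transfer, where the server converts EBCDIC to ASCII, and a binary transfer that leaves the bytes untouched:

using System.IO;
using System.Net;

class MainframeFtpDownloadSketch
{
    static void Main()
    {
        // hypothetical host, credentials, and dataset name
        var request = (FtpWebRequest)WebRequest.Create("ftp://mainframe.example.com/'HLQ.MY.DATASET'");
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential("tsouser", "password");

        // false = ASCII (TYPE A): the z/OS FTP server converts EBCDIC to ASCII
        // true  = binary (TYPE I): bytes arrive unconverted, EBCDIC and all
        request.UseBinary = false;

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var remote = response.GetResponseStream())
        using (var local = File.Create(@"C:\data\dataset.txt"))
        {
            remote.CopyTo(local);
        }
    }
}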
While the others are correct that all recent releases of z/OS have USS built in, there's quite a bit of setup work that needs to be done in order for individual users to have access to USS capabilities like SFTP. Out of the box, you get USS "minimal mode", which has just enough of USS to support the TCP/IP stack and so forth. USS "full function mode" requires setup:
HFS filesystems need to be allocated
Your security package needs to be configured to manage UIDs/GIDs for your users
etc etc etc
Still, with these details and with nothing more than the software you're entitled to as part of your z/OS license, you can certainly run SFTP and all the other UNIX style network services you're used to.
A good place to start is the UNIX Services Planning guide: http://publibz.boulder.ibm.com/epubs/pdf/bpxzb2c0.pdf

Efficient reliable incremental HTTP multi-file (or whole directory) upload software

Imagine you have a web site to which you want to send a lot of data - say, 40 files totaling the equivalent of 2 hours of upload bandwidth. You expect to have 3 connection losses along the way (think: mobile data connection, WLAN vs. microwave). You can't be bothered to retry again and again; this should be automated. Interruptions should not cause more data loss than necessary, and retrying a complete file is a waste of time and bandwidth.
So here is the question: Is there a software package or framework that
synchronizes a local directory (contents) to the server via HTTP,
is multi-platform (Win XP/Vista/7, MacOS X, Linux),
can be delivered as one self-contained executable,
recovers partially uploaded files after interrupted network connections or client restarts,
can be generated on a server to include authentication tokens and upload target,
can be made super simple to use
or what would be a good way to build one?
Options I have found until now:
Neat packaging of rsync. This requires an rsync (server) instance on the server side that is aware of a privilege system.
A custom Flash program. As I understand it, Flash 10 is able to read a local file as a ByteArray (indicated here) and is obviously able to speak HTTP to the originating server. Seen in question 1344131 ("Upload image thumbnail to server, without uploading whole image").
A custom native application for each platform.
Thanks for any hints!
Related work:
HTML5 will allow multiple files to be uploaded, or at least selected for upload, "at once". See here for example. This is agnostic about the local files and does not feature recovery of a failed upload.
Efficient way to implement a client multiple file upload service basically asks for SWFUpload or YUIUpload (Flash-based multi-file uploaders, otherwise "stupid")
A comment in question 997253 suggests JUpload - I think using a Java applet will at least require the user to grant additional rights so it can access local files.
GearsUploader seems great but requires Google Gears, and that is going away soon.
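On the "what would be a good way to build one" part: one common pattern is to ask the server how many bytes of a file it already holds, then send only the missing tail with a Content-Range header. The sketch below assumes a cooperating server at a hypothetical URL that reports the received length to a HEAD request and accepts partial PUTs; it is an illustration of the idea, not a ready-made protocol:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ResumableUploadSketch
{
    static async Task Main()
    {
        var uploadUri = new Uri("https://example.com/upload/archive.zip"); // hypothetical
        string localPath = @"c:\data\archive.zip";
        long total = new FileInfo(localPath).Length;

        using (var http = new HttpClient())
        {
            // ask the server how much of the file it already has
            var head = new HttpRequestMessage(HttpMethod.Head, uploadUri);
            var headResponse = await http.SendAsync(head);
            long offset = headResponse.Content.Headers.ContentLength ?? 0;

            // send only the missing tail, labeled with its position in the file
            // (a real implementation would skip files that are already complete)
            using (var file = File.OpenRead(localPath))
            {
                file.Seek(offset, SeekOrigin.Begin);
                var content = new StreamContent(file);
                content.Headers.ContentRange = new ContentRangeHeaderValue(offset, total - 1, total);
                var response = await http.PutAsync(uploadUri, content);
                Console.WriteLine(response.StatusCode);
            }
        }
    }
}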
