How is FTP supposed to work with QNetworkAccessManager?

So QFtp was removed because it had a bad design, they say. Whatever.
However, for new applications, it is recommended to use
QNetworkAccessManager and QNetworkReply, as those classes possess a
simpler, yet more powerful API. (from docs)
I looked up QNetworkAccessManager and it's just confusing. It's not a socket and doesn't even seem to provide raw socket methods. But it's also not FTP. It provides HTTP-related functions like cookieJar, post, get... Can someone remind me how FTP responds to HTTP data?
So this question consists of two parts:
What are we basically expected to do to make a simple connection to an FTP server and perform one basic operation, like a directory listing?
How does it work at a low level, and why?
I have of course found some code and I'm now trying to get it to work, but the real problem is that it makes no sense to me.

The current FTP back-end of QNetworkAccessManager implements the get() and put() operations, so it is sufficient for downloading or uploading files using the FTP protocol.
The protocol used by QNetworkAccessManager is determined by the URL scheme.
So, for FTP URLs ("ftp://..."), the functions get() and put() create a connection using the FTP protocol.
The username and password are taken from QUrl::userName() and QUrl::password().
So, the key words in the docs are "a simpler API". There is no need to manually execute connect, login, cd, and get commands, and thus no need to implement your own state machine to manage the many FTP states in an asynchronous application, as was necessary before with QFtp.
QNetworkAccessManager provides a much simpler API for downloading/uploading files.
For other FTP commands, an external module such as Qt Ftp should be used.
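For illustration, here is a minimal sketch of such a download, assuming Qt 5 (where the FTP back-end is still present); the host, path, and credentials below are placeholders:

#include <QCoreApplication>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    // The "ftp" scheme selects the FTP back-end; credentials ride in the URL.
    QUrl url("ftp://ftp.example.com/pub/readme.txt");
    url.setUserName("anonymous");
    url.setPassword("guest@example.com");

    QNetworkReply *reply = manager.get(QNetworkRequest(url));
    QObject::connect(reply, &QNetworkReply::finished, [&app, reply]() {
        if (reply->error() == QNetworkReply::NoError) {
            QFile out("readme.txt");
            if (out.open(QIODevice::WriteOnly))
                out.write(reply->readAll());    // save the downloaded body
        }
        reply->deleteLater();
        app.quit();
    });
    return app.exec();
}

Uploading works the same way, except with put() and the data you want to send.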

Related

LDAP Proxy with inspection/modification of requests and responses

I need to build an LDAP proxy that I can program to inspect and modify the LDAP requests and responses - some of the LDAP requests/responses will simply be passed through, but for others I might want to send two different requests to the server and then combine the results (that's just one example - there will be other use cases).
I've looked at the proxying options documented for OpenLDAP's slapd, and I see that it has quite flexible configuration and 'overlays', but no capability to insert custom code.
So I think that's not a solution, unless slapd's source code is easy to modify, to insert my own modules plus hooks to/from the existing code (?)
An alternative would be to start with a friendly TCP/IP framework library (or even a complete TCP/IP proxy). Then I can link to an ASN.1 decoding/encoding library and write the rest myself (a rough sketch of what I mean is below).
I'd prefer to avoid having to write (& learn) all the TCP/IP connection/message handling and event loop myself.
So I'm looking for the most complete starting point that does the hard work and gives me the flexibility to write what I need. Typical lazy/greedy approach :-)
Must be open source, ideally in C or C++, and I'll probably be targeting RHEL/CentOS 8 in a container.
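To make that concrete, here is roughly the pass-through skeleton I have in mind: a minimal single-connection sketch using POSIX sockets, where the ports, the upstream address, and the inspection hook are placeholders, and error handling is omitted:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

static void pump(int from, int to) {
    char buf[4096];
    ssize_t n;
    // A real LDAP proxy would decode ASN.1/BER PDUs here instead of
    // forwarding raw bytes; this is where inspect/modify logic hooks in.
    while ((n = read(from, buf, sizeof buf)) > 0)
        if (write(to, buf, n) != n) break;
    shutdown(to, SHUT_WR);
}

int main() {
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(10389);              // proxy listening port
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lsock, (sockaddr *)&addr, sizeof addr);
    listen(lsock, 1);
    int client = accept(lsock, nullptr, nullptr);

    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in up{};
    up.sin_family = AF_INET;
    up.sin_port = htons(389);                  // real LDAP server (placeholder)
    inet_pton(AF_INET, "127.0.0.1", &up.sin_addr);
    connect(server, (sockaddr *)&up, sizeof up);

    std::thread c2s(pump, client, server);     // client -> server
    pump(server, client);                      // server -> client
    c2s.join();
    close(client);
    close(server);
    close(lsock);
    return 0;
}

A real proxy would of course need an accept loop, non-blocking I/O or a thread per connection, and BER/ASN.1 decoding at the hook point; that's the part I'd rather get from an existing framework.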

Is it insecure to execute code via an HTTP URL?

I'm suspicious of the installation mechanism of Bioconductor. It looks like it is just executing (via source()) the R script from an HTTP URL. Isn't this an insecure approach vulnerable to a man-in-the-middle attack? I would think that they should be using HTTPS. If not, can someone explain why the current approach is acceptable?
Yes, you are correct.
Loading executable code over a cleartext connection is vulnerable to a MITM.
Unless the code is loaded over HTTPS, where SSL/TLS can encrypt and authenticate the connection, or has been signed and verified at the client, a MITM attacker could alter the input stream and cause arbitrary code to be executed on your system.
Allowing code to execute via an HTTP GET request essentially means you're allowing user input to be directly processed by the application, thus directly influencing its behavior. While this is often what the developer wants (say, to query specific information from a database), it may be exploited in ways you have already mentioned (e.g. MITM). This is often (though I'm not referring to Bioconductor specifically) a bad idea, as it opens the system to possible XSS/(B)SQLi attacks, among others.
However, the URL - http://bioconductor.org/biocLite.R - is essentially just a file placed on the web server, and from what it seems, source() is being used to download it directly. There does not seem to be any user input anywhere in this example, so no, I wouldn't mark it as unsafe; however, your reasoning is indeed correct.
Note: this refers simply to GET requests, e.g. http://example.com/artists/artist.php?id=1. Such insecurities could be exploited in many HTTP requests, such as Host header attacks, but the general concept is the same: no user input should ever be directly processed by the application in any way.

How do I do sessions with a Flash client?

The Flash Player (or, more exactly, the URLLoader class) will not let you read HTTP response headers or cookies set by the server. And even if you get hold of a session cookie through some workaround, like reaching out to the browser and running JS, you can't send it back to the server, because the Cookie header, among others, will be blocked.
Now I'm building a Flex client against an HTTP API for my server product. I control both sides, so I can get around the above limitations; now I'm wondering how. I see the following options:
include the session token in the HTTP payload
include the token in the URL
build my own HTTP client (... with blackjack, and hookers ...) in AS, using the Socket class
I don't like (1), because I'd be reimplementing functionality in my protocol that is already built into Struts, which I'm using to implement the server side. I'd then have to ensure that either both behave the same way, or turn off the usual session management and force other clients to use my protocol where they could just have the browser deal with it.
I don't like (2), because I understand there are security concerns with this, although I'm not too sure what they are.
I don't like (3), because it's 2010 and tons of HTTP clients have been written by smarter people than me.
So, are there other opportunities? Which of my "don't like"s do you reckon least severe? Are there ways to mitigate the problems I listed? For example, how insecure are session tokens in URLs really?
How about using the FlashVars parameter? It's designed explicitly to pass simple data into a Flash app, and it's trivial to embed the session token into the tag when the page is generated server-side. PHP-wise, it'd be something like
<embed src="movie.swf" flashvars="sessionID=<?= session_id(); ?>">blah blah blah</embed>
This way there's no session data in the movie's URL that could leak via Referer headers, and the data's already "there", so the app doesn't have to reach out and talk to the browser. And if someone's sniffing the source HTML page to get the data, they could have gotten the same information from the HTTP headers anyway.
There's more details here in the Adobe docs.
Some of your post may have eluded me, but do you know about Shared Objects?
"The SharedObject class is used to read and store limited amounts of data on a user's computer or on a server. Shared objects offer real-time data sharing between multiple client SWF files and objects that are persistent on the local computer or remote server. Local shared objects are similar to browser cookies and remote shared objects are similar to real-time data transfer devices.

What are your experiences implementing/using WebDAV?

For a current project, I was thinking of implementing WebDAV to present a virtual file store that clients can access. I have only done Google research so far but it looks like I can get away with only implementing two methods:
GET, PROPFIND
I think that this is great. I was just curious, though, about implementing file uploading via:
PUT
I haven't implemented it, but it seems simple enough. My only concern is whether a progress meter will be displayed for the user if they are using the standard Vista Explorer or OS X Finder.
I guess I'm looking for some stories from people experienced with WebDAV.
For many WebDAV clients, and even for read-only access, you will also need to support OPTIONS. If you want to support upload, PUT obviously is required, and some clients (Mac OS X?) will require locking support.
(BTW, RFC 4918 is the authoritative source of information.)
I implemented most of the WebDAV protocol in about a day's work: http://github.com/nfarina/simpledav
I wrote it in Python to run on Google App Engine, and I expect any other language would be a similar effort. All in all, it's about two pages of code.
I implemented the following methods: OPTIONS, PROPFIND, MKCOL, DELETE, MOVE, PUT, GET. So far I've tested Transmit and Cyberduck, and both work great with it.
Hopefully this can provide some guidance for the next person out there interested in implementing a WebDAV server. It's not a difficult protocol; it's just very dense with abstracted language like 'depth' and 'collections' and blah.
Here's the spec: http://www.webdav.org/specs/rfc4918.html
But the best way to understand the protocol is to watch a client interacting with a working server. I used Transmit to connect to Box.net's WebDAV server and monitored traffic with Charles Proxy.
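To give a flavor of that traffic, here is roughly what a Depth: 1 PROPFIND exchange looks like on the wire (the path and property values are made up):

PROPFIND /pub/ HTTP/1.1
Host: example.com
Depth: 1
Content-Type: application/xml

<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:displayname/>
    <D:getcontentlength/>
    <D:resourcetype/>
  </D:prop>
</D:propfind>

HTTP/1.1 207 Multi-Status
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/pub/</D:href>
    <D:propstat>
      <D:prop>
        <D:displayname>pub</D:displayname>
        <D:resourcetype><D:collection/></D:resourcetype>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>

For Depth: 1 the server returns one <D:response> element for the collection itself plus one per immediate child.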
Bit late to the party, but I've implemented most of the WebDAV protocol, and I can tell you with confidence that you'll need to implement most of it.
For OS X you'll need class-2 WebDAV support, which includes LOCK and UNLOCK. (I found it particularly difficult to fully implement the HTTP If: header, but for Finder you'll only need a bit of it.)
These are some of my personal findings:
http://sabre.io/dav/clients/windows/
http://sabre.io/dav/clients/finder/
Hope this helps
If you run Apache Jackrabbit under, say, Tomcat, it can be configured to offer WebDAV and store uploaded files. Perhaps that will be a useful model, or even a good enough replacement for the planned implementation.
Apache Jackrabbit Support for WebDAV
Also, you may want to be aware of the BitKinex client (free 30 day trial), which I have found to be a useful tool for testing a WebDAV server.
BitKinex Home Page
We use WebDAV internally to provide a folder-based view of some file shares to clients outside of our firewall. We're using IIS6 for this.
Basically, it boils down to creating a Virtual Directory in IIS that maps to each network file system that you want to make available via WebDAV:
Set it up with the content coming from "A share located on another computer" - use the UNC path to the share for the Network Directory value.
We turn on all options except "Index this resource".
Disable all default content pages.
Turn on Windows Integrated Authentication (ours is set up using SSL as well).
I have the root set up to deny access to anonymous users and allow access to any authenticated user.
We also have a wildcard MIME mapping (.* to application/octet-stream).
Enable the WebDAV web service extension in IIS.
You also need to set up the web server to delegate permissions to all the file servers you may be accessing, so it can pass on the user's credentials.
If you have Macintosh clients, you may also need an ISAPI filter that maps 401 errors to 403 errors for Darwin clients. Microsoft and Apple disagree on how to handle the situation where you don't have permission to write to a directory: Apple keeps resending the credentials on a 401 (Access Denied) error, and translating it to a 403 (Forbidden) error keeps this from happening. By default, Apple likes to write a "dot" file to every directory it accesses, so navigating through directories where you don't have write access will end up crashing the Finder if you don't have the filter. I have source code for this if needed.
This is all off the top of my head. It's possible (probable?) that I may have missed something. Feel free to contact me via the contact information on my web site if you have problems.
We have a WebDAV servlet in our web-based product.
I've found Apache Jackrabbit a good help for implementing it. However, WebDAV is a serious P.I.T.A. when it comes to client-side support.
Many client implementations differ widely in their behavior, and you most likely will have to support several different kinds of buggy implementations.
Some examples:
MS Vista only supports authentication over SSL.
Most Windows-based WebDAV clients assume your WebDAV server/servlet is a SharePoint server and will act accordingly (thus not according to the WebDAV protocol).
One example of this is that you NEED to allow an unauthenticated LOCK request on the root of your server (i.e. yourdomain.com/, not yourdomain.com/where/webdav/should/live), or else you won't be able to get write access in MS Windows.
(This is a serious P.I.T.A. on a Tomcat machine, where your stuff usually lives in server.com/servlets/paths/thelocation.)
Most (all?) versions of MS Office respond differently to WebDAV links.
I guess my point is that integrating WebDAV support into an existing product can be a LOT harder than you would expect, and if possible I would advise using a (semi-)standalone WebDAV server such as the Jackrabbit WebDAV server or Apache mod_dav.
I've found OS X's Finder WebDAV support to be really finicky. In order to get read-write support, you have to implement LOCK, in addition to other bits.
I wrote a WebDAV interface to a Postgres database, where Python modules were stored in the database in a hierarchical folder-like structure. Accessing it with cadaver worked fine, and IIRC a GUI Windows browser worked too, but Finder refused to mount the share as anything other than read-only.
So, I don't know if it would give a progress bar. The files I was dealing with were small enough that a read/copy from them was virtually instantaneous. I think a copy of a large file using the Finder would probably give a progress bar - it does for any other type of mounted share.
Here is another open source project for WSGI WebDAV:
http://code.google.com/p/wsgidav/
It is where I picked up the PyFileServer project.

How to post a file to an image hosting service in .NET?

Scenario:
localhost receives the current HttpRequest with 3 hidden inputs and a posted file. I must then forward this form data to an external image host and get the response.
See the System.Net.WebClient and related classes. You can use them to create a request to the remote server and handle the response. Also get Fiddler to help you replicate what the browser sends.
I hate doing this. It wastes my server's bandwidth and ties up IIS threads, as well as using my server's CPU. It sucks and it's worth avoiding at all costs. Many services (one that comes to mind is fliqz) provide a mechanism whereby files are uploaded directly from the client to their server (bypassing yours), and they then make a request to your server, passing it various info on the query string.