I am doing file transfers, but the FileReference API doesn't support file chunking. Has anyone done this before? For example, I would like to be able to upload a 1 GB file from an AIR client to a custom PHP/Java/etc. service.
It seems that all you should have to do is use the upload() routine; the PHP or Java service should be doing the chunking.
var myHugeFile = new air.File('myHugeLocal.file');
myHugeFile.upload(new air.URLRequest("http://your.website.com/uploadchunker.php"));
There is a much more elaborate example of using FileReference in the Adobe learning area here:
http://www.adobe.com/devnet/air/flex/articles/uploading_air_app_to_server.html
Three options jump out on this:
Use an FTP service that supports resumable transfers, assuming Flash supports this as well. Maybe not an option if you want to communicate with a custom service of your own.
Leverage the HTTP partial-content header support (Content-Range & Content-Length). Only applicable if AIR allows access to the appropriate HTTP headers. This is what BITS does. Probably a bit harder to implement; see the sketch after this list.
Hand roll your own TCP or UDP protocol exchange. Not for the faint of heart. I'd look in the OSS space before going this route.
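To make the second option concrete, here is a minimal C# sketch of a server-side handler that reassembles chunks from the Content-Range header (the handler name, target path, and single-upload assumption are all illustrative; the question mentions PHP/Java, and the same idea ports directly):

using System;
using System.IO;
using System.Web;

public class UploadChunkHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // The client labels each chunk, e.g. "bytes 0-1048575/1073741824".
        string rangeHeader = context.Request.Headers["Content-Range"];
        long offset = 0;
        if (!string.IsNullOrEmpty(rangeHeader) && rangeHeader.StartsWith("bytes "))
        {
            offset = long.Parse(rangeHeader.Substring(6).Split('-')[0]);
        }

        // Hypothetical target path; a real service would key this per upload.
        string target = context.Server.MapPath("~/App_Data/upload.part");
        using (var file = new FileStream(target, FileMode.OpenOrCreate, FileAccess.Write))
        {
            file.Seek(offset, SeekOrigin.Begin);
            context.Request.InputStream.CopyTo(file); // write this chunk at its offset
        }
    }
}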
I think FileReference does chunk, at least that is what I have observed. Using a tool like Fiddler, you can watch it in action. If you analyze the outgoing headers of a FileReference upload, they are chunked.
If resumes are what you're after, I cannot say how you would go about that with FileReference. I have uploaded small files in generic POSTs, but that requires the Flash/AIR client to load all the bytes into the app. In AIR, that may or may not crash the runtime with a 1 GB file (it depends on your system, I guess).
I am a total newbie in C#, .NET Core 2, and protocol buffers, but I have to work with those three technologies for a personal project (server/client architecture). I have some questions about serialization/deserialization across multiple messages.
I have already seen this:
https://developers.google.com/protocol-buffers/docs/techniques#streaming
So I know it needs a special technique. After a little Google session I found something about push and pop limits in C++, but I haven't seen docs for C#.
I have another question: does Google protocol buffers handle reading nicely? My sockets are monitored with select, so when I want to read my messages (using https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/message-parser#class_google_1_1_protobuf_1_1_message_parser_1a110e5d9bc61837e369e5deb093f59161)
I am not sure how protobuf will stop reading (I don't want to read beyond the data available on the socket, because that would make my server block...).
Does it manage this? Thank you...
Standalone protobufs aren't delimited in any way - they don't encode their own length and have no fixed start or end.
But the API gives you some tools for sending and storing multiple messages - specifically, you can use WriteDelimitedTo() to write multiple length-prefixed protobufs to some output, and then read them back using ParseDelimitedFrom().
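In the C# API these live in MessageExtensions and MessageParser. A minimal sketch, assuming a message type Person generated from your .proto (the file name and the person1/person2 variables are illustrative):

using Google.Protobuf;
using System.IO;

// Write several length-delimited messages to one stream.
using (var output = File.Create("people.bin"))
{
    person1.WriteDelimitedTo(output);
    person2.WriteDelimitedTo(output);
}

// Read them back one at a time; each call consumes exactly one
// length-prefixed message, so it never reads past a message boundary.
using (var input = File.OpenRead("people.bin"))
{
    while (input.Position < input.Length)
    {
        Person p = Person.Parser.ParseDelimitedFrom(input);
        // ... handle p ...
    }
}

Note that on a live socket, ParseDelimitedFrom will still block until a whole message has arrived, so with select-based I/O you generally buffer incoming bytes yourself and only parse once a complete delimited message is available.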
I am starting to work on a project where I need to stream Twitter data using PowerTrack/GNIP, and I have to be honest: I am very inexperienced when it comes to networks, and I have absolutely no knowledge of HTTP data streams and how they work.
Are there any resources out there that go through all of this in simple terms? I would love to be able to map the data streaming process in my head before I start looking at APIs.
Thanks
Take a look at the following two resources, which give a good overview of video streaming. Video streaming probably has more background material available, and it should help you understand the concepts:
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
http://www.jwplayer.com/blog/what-is-video-streaming/
In very simple terms, streaming breaks a large file or live stream into chunks and sends those chunks one after another to a client (e.g. a browser). For content which is not a live stream, the client can generally request a start point. In the background this generally works by the client sending requests for each individual chunk (rather than just one request with multiple responses).
The advantage of the multiple-request approach is that you know the client is actually still interested (e.g. the user has not browsed to another page), and for video and audio the client can dynamically request different-bandwidth files depending on the current network connection - see: http://en.wikipedia.org/wiki/Adaptive_bitrate_streaming.
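For Twitter specifically, the streaming APIs instead hold a single HTTP connection open and push data down it continuously. As a rough illustration, here is a minimal C# sketch of reading such a stream line by line (the URL is a placeholder, and a real PowerTrack/GNIP connection also needs authentication):

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StreamConsumer
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        // ResponseHeadersRead stops HttpClient from buffering the whole
        // (endless) body before returning.
        using (var response = await client.GetAsync(
            "https://example.com/stream",
            HttpCompletionOption.ResponseHeadersRead))
        using (var body = await response.Content.ReadAsStreamAsync())
        using (var reader = new StreamReader(body))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                Console.WriteLine(line); // e.g. one JSON-encoded activity per line
            }
        }
    }
}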
Twitter does have a streaming overview page also, but you have probably already seen this:
https://dev.twitter.com/streaming/overview
Can anyone tell me why the Range header is restricted in the Flash Player?
I want to be able to pause and resume downloads in my Flex application, but I get an RTE when trying to set the Range header.
Error #2096: The HTTP request header Range cannot be set via ActionScript.
I imagine there isn't going to be a workaround client side, but I expect there is a way to get a server to accept the range information under a different header name...
Would like to know Adobe's reason for this, though; hopefully it's not just to sell more copies of FMS :p
I just discovered exactly the same issue with the Range header while attempting to add ranged GET requests to our REST layer in Flex. Range is on the "blacklist" and the Flash Player simply won't send it.
Flash/Flex headers ate my brain a year or so back (verveguy.blogspot.com) but this is the last straw.
The solution I am now going to finally embrace is to use the open source as3httpclientlib and just abandon the Flash HTTP stack. We've used it successfully for some minor parts of our app (specifically, for talking to the JIRA API) so it's time to beat it into submission for all HTTP traffic.
For your specific problem, you could certainly switch to a custom header, say X-Range. This assumes you have control of the server-side code and that you also have a crossdomain.xml policy file that allows headers. (Blacklisted headers are the first set to be culled. After that, the Flash Player checks the crossdomain.xml advertised by the server you're talking to, to see whether it allows specific (or all other) headers.)
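For reference, the relevant policy file entry would look something like this minimal sketch (the domain and header name are illustrative):

<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Let Flash clients from these hosts send the custom X-Range header -->
  <allow-access-from domain="*.yoursite.com"/>
  <allow-http-request-headers-from domain="*.yoursite.com" headers="X-Range"/>
</cross-domain-policy>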
Hope this helps
Here are a couple of Adobe Tech Notes that explain their reasoning:
Arbitrary headers are not sent from Flash Player to a remote domain
ActionScript error when an HTTP send action contains certain headers (Flash Player)
I am trying to design a system for something like this with ASP.NET/C#.
The users pay for downloading some content (files: MP3s, PDFs, DOCs, etc.). I should be able to track the number of bytes downloaded by the user. If the number of bytes downloaded matches the number of bytes on the server, I should set a flag in the DB (indicating that the download was successful and preventing them from downloading the file again or being asked to pay for it again). If the download was incomplete, they should be able to download the file again without paying for it (since the flag will not be set).
Is there any way to keep track of the number of bytes successfully downloaded by the client?
Also, when I see a file size on my WinXP machine, I see two sizes (size, size on disk). Which one should I consider? And will it differ from one OS to another?
You can easily measure data passed to the client in ASP.NET, assuming you replace a direct IIS-controlled download with your own, which would go something like this (a fragment from inside a handler; path, bufferSize, and totalSent are assumed to be defined earlier):

// Stream the file manually so every byte handed to ASP.NET is counted.
using (var file = System.IO.File.OpenRead(path))
{
    var buffer = new byte[bufferSize];
    int bytesRead;
    while (context.Response.IsClientConnected &&
           (bytesRead = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        context.Response.OutputStream.Write(buffer, 0, bytesRead);
        context.Response.Flush();
        totalSent += bytesRead; // running count of bytes pushed out
    }
}
It's complicated to make this 100% reliable from within ASP, but it can be done. You pretty much have to account for every possible failure point and react accordingly.
The problem though is still - as someone mentioned above - that it's impossible to know that the client received the data. If money is involved in this transaction, that can get to be a problem really quickly.
For that reason, the best approach would be to use a custom downloader client, like the one Amazon uses for MP3 file purchases. That way you're not subjecting either yourself or your customers to the vagaries of moving monetized bits over something as unreliable as HTTP.
You can create an ASP.NET handler that serves the file (for ASP.NET MVC you can do a result action instead; this is what I'm using). Make sure it supports resumable downloads.
From there you can track the bytes served.
P.S. This incurs a performance overhead vs. letting IIS serve it.
Update 1: I used something pretty similar to this: http://dotnetslackers.com/articles/aspnet/Range-Specific-Requests-in-ASP-NET.aspx ... and the article has a pretty clear explanation of what's inside it. You can probably use that one as-is; see the example in that post.
You could try looking into HTTP response codes (i.e. 200, 404, etc.). The client and server will be exchanging HTTP headers so that they know what's going on, and you should be able to monitor these to see if the responses were successful (not sure, but you should be able to).
With regards to file size, I would try experiments on files with 'known' sizes and compare what the HTTP logs tell you with what the file explorer tells you.
Also, I've seen tools/widgets that report file upload progress, so you're right, you should be able to do the same in reverse, I guess. You could try looking at file upload code examples and tutorials; you might get some hints. I can't think of any off the top of my head - sorry.
To do custom byte serving like this, you will need to implement your own http handler.
This handler should do the following:
Implement some kind of authentication on the http handler, so you know who you are dealing with.
Then you will need to implement some kind of logging for files requested and files allowed to be downloaded.
Implement ETag and Expires headers for client-side caching.
Server-side caching
Deflate/gzip compression
If you want to support resumable downloads, you will need to implement 206 partial responses. This is essential for any kind of streaming and for serving PDFs. (A minimal sketch follows the header list below.)
So you should be handling the following http headers:
ETag
Expires
Accept-Ranges
Range
If-Range
Last-Modified
If-Match
If-None-Match
If-Modified-Since
If-Unmodified-Since
Unless-Modified-Since
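As a starting point, a minimal sketch of the 206 logic in an IHttpHandler might look like this (the file path, single-range parsing, and content type are simplifying assumptions; a production handler also needs the validation headers above):

using System;
using System.IO;
using System.Web;

public class RangeFileHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath("~/files/sample.pdf"); // hypothetical file
        var info = new FileInfo(path);
        long start = 0, end = info.Length - 1;

        string range = context.Request.Headers["Range"]; // e.g. "bytes=0-499"
        if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
        {
            string[] parts = range.Substring(6).Split('-');
            if (parts[0] != "") start = long.Parse(parts[0]);
            if (parts.Length > 1 && parts[1] != "") end = long.Parse(parts[1]);
            context.Response.StatusCode = 206; // Partial Content
            context.Response.AppendHeader("Content-Range",
                string.Format("bytes {0}-{1}/{2}", start, end, info.Length));
        }

        context.Response.AppendHeader("Accept-Ranges", "bytes");
        context.Response.ContentType = "application/octet-stream";

        // Send only the requested slice of the file.
        using (var fs = File.OpenRead(path))
        {
            fs.Seek(start, SeekOrigin.Begin);
            var buffer = new byte[64 * 1024];
            long remaining = end - start + 1;
            while (remaining > 0 && context.Response.IsClientConnected)
            {
                int read = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                if (read <= 0) break;
                context.Response.OutputStream.Write(buffer, 0, read);
                remaining -= read;
            }
        }
    }
}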
If you are looking for sample implementations of HTTP handlers, check out:
http://code.google.com/p/talifun-web/wiki
It has a static file handler that implements all the above http headers, client side and server side caching and even compression.
There is also a log module and an authorization module that should go a long way into how to implement authentication and logging.
The size you want is the size (not the size on disk). Size on disk includes the extra space taken up by rounding up to the 4 KB block size of the partition. The size is the exact number of bytes in the file.
I don't believe there is a good way to tell that a download has been completed. Response.TransmitFile is probably the best method for sending the file securely, but I don't believe it has anything that will tell you if the user actually received the file.
I don't know about the business this is supporting, but I can't think of a legitimate business whose users would tolerate a single-download-per-purchase model, and the ambiguity of the standard HTTP request/response model does not lend itself to building an accurate client-side receiver. Not to mention this model could easily be gamed by sending a failed response on receipt of the last packet.
I think using something like a download window (e.g. 2 hours after purchase) and then locking it to an IP after the first request would accomplish the same result with a lot fewer user issues and support calls. Also, unless the file has some sort of stringent DRM, allowing the user persistent access based on their login is most likely the appropriate business model, because once they get the file they can copy it as many times as they like.
Look at DVD or Blu-Ray, no amount of copy protection or access controls will save your files from pirates, so make things easy for legitimate users.
In an ASP.NET application, how is it possible to download all PNG, CSS, JavaScript, and other resources in parallel?
I am monitoring with Fiddler and found that the content is downloaded one after another.
That is actually more of a browser (client) behaviour, in accordance with the HTTP 1.1 specification. The guideline is to limit simultaneous downloads to two per hostname.
http://www.yuiblog.com/blog/2007/04/11/performance-research-part-4/
While you may be able to alter your browser's settings to download more per hostname, that only affects your machine, not those of others out in the Internet wilderness. One way to trick clients into downloading more simultaneously is to serve your web resources from different hostnames, like images stored at http://images.yoursite.com. But you may want to test this and balance it out, as per the article's suggestion.
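A sketch of what that hostname sharding might look like in C# (the hostnames are illustrative, and both would serve the same content):

using System;

public static class AssetShards
{
    static readonly string[] Shards =
    {
        "http://images1.yoursite.com",
        "http://images2.yoursite.com",
    };

    // Map each asset path to the same shard every time so the
    // browser cache still gets hits on repeat views.
    public static string ShardUrl(string path)
    {
        int index = (path.GetHashCode() & 0x7fffffff) % Shards.Length;
        return Shards[index] + path;
    }
}

Then in a page you would emit, e.g., <img src="<%= AssetShards.ShardUrl("/img/logo.png") %>" /> instead of a plain relative URL.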
You can try AJAX for that: there are usually around five allowed HTTP connections between server and client, so you could theoretically use them all at once.
However, I guess you will gain little advantage from this unless you have really big (or many) CSS and JavaScript files.
Not sure if this will work on images or other files.