What is the best-performing / most compact way to send binary data to a server in WP7?

Given that Windows Phone 7 doesn't allow direct TCP/socket access, I was wondering which approach has the least performance overhead and/or can send the data in the most compact form.
I think I can send the data as a file over HTTP (probably with an HttpWebRequest) and encode it as Base64, but this would increase the transfer size significantly. I could use WCF, but the performance overhead is going to be large as well.
Is there a way to send plain binary data without encoding it, or some faster way to do so?

Network communication on WP7 is currently limited to HTTP only.
With that in mind, you're going to have to allow for the HTTP header being included as part of the transmission. You can help keep this small by not adding any additional headers yourself (unless you really have to).
In terms of the body of the message then it's up to you to keep things as small as possible.
Formatting your data as JSON will typically be smaller than as XML.
If, however, your data will always be in a specific format, you could just include it as raw data. I.e. if you know that the data will have the first n bits/bytes/characters representing one thing, the next y bits/bytes/characters representing another, etc., you could format your data without any (field) identifiers. It just depends what you need.
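For instance, a fixed-layout payload with no field identifiers could be packed like this. This is just a minimal sketch in Python to illustrate the idea (the field sizes are invented for the example); on the phone you would do the equivalent with BitConverter or a BinaryWriter:

import struct

# Hypothetical fixed layout: 2-byte message type, 4-byte timestamp, 8-byte reading.
# Only the raw values go over the wire; there are no field names or delimiters,
# because sender and receiver both know the layout in advance.
payload = struct.pack("!HId", 7, 1234567890, 42.5)
print(len(payload))  # 14 bytes: 2 + 4 + 8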

If you want to send binary data, then certainly some people have been using raw sockets - see
Connect to attached pc from WP7 by opening a socket to localhost
However, unless you want to write your own socket server, then HTTP is very convenient. As Matt says, you can include binary content in your HTTP requests. To do this, you can use the headers:
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
Content-Length: your length
To actually set these headers, you may need to send this as a multipart message... see questions like Upload files with HTTPWebrequest (multipart/form-data)
There's some excellent sample code on AppHub forums - http://forums.create.msdn.com/forums/p/63646/390044.aspx - shows how to upload a binary photo to Facebook.
Unless your data is very large, it may be easier to take the 4/3 hit of Base64 encoding :) (and there are other slightly more efficient encodings too, like Ascii85 - http://en.wikipedia.org/wiki/Ascii85)
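To illustrate what such a plain binary POST looks like, here is a rough sketch, written in Python purely to show the HTTP mechanics (the endpoint URL is made up; on the phone you would build the equivalent request with HttpWebRequest):

import urllib.request

with open("data.bin", "rb") as f:
    body = f.read()                       # raw bytes, no Base64 step

req = urllib.request.Request(
    "http://example.com/upload",          # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/octet-stream"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                    # Content-Length is added automatically

The body is sent byte-for-byte, so the only overhead on top of the payload itself is the HTTP header block.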

Related

Why use Server-Sent Events instead of simple HTTP chunked streaming?

I just read RFC-6202 and couldn't figure out benefits of using SSEs instead of simply requesting a chunked stream. As an example use case imagine you want to implement client and server, where the client wants to "subscribe" to events at the server using pure HTTP technology. What would be a drawback of the server keeping the initial HTTP request open and then occasionally sending new chunks as new events come up?
I found some arguments against this kind of streaming, including the following:
Since Transfer-Encoding is hop-by-hop instead of end-to-end, a proxy in between might try to consolidate the chunks before forwarding the response to the client.
A TCP connection needs to be kept open between client and server the whole time.
However, in my understanding, both arguments also apply to SSE. Another potential argument I could imagine is that a JavaScript browser client might have no chance to actually get at the individual chunks, since re-combining them is handled at a lower level, transparent to the client. But I don't know if that's actually the case, since video streams must work in some similar way, or not?
EDIT: What I've found in the meantime is that SSE is basically just a chunked stream, wrapped in an easier-to-use API. Is that right?
And one more thing: this page first says that SSE doesn't support streaming binary data (for which technical reason?) and then (at the bottom) says that it is possible but inefficient. Could somebody please clarify that?
Yes, SSE is an API that works on top of HTTP and provides some nice features such as automatic reconnection on the client/server side and handling of different event types.
If you want to use it for streaming binary data, it is certainly not the right API. The main point is that SSE is a text-based protocol (it's delimited by '\n's and every line starts with a text tag). If you still want to experiment with binary over SSE, a quick and dirty hack would be to submit the binary data as Base64.
If you want to know more about SSE, you can have a look at this simple library: https://github.com/mariomac/jeasse
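To make the Base64-over-SSE hack concrete, here is a minimal sketch (the event name and payload are invented) of what a single event could look like on the wire and how a receiver could decode it back to bytes:

import base64

# What the server writes into the open response stream: plain text, '\n'-delimited,
# each line starting with a field tag such as "event:" or "data:".
event_text = "event: blob\ndata: SGVsbG8sIGJpbmFyeSE=\n\n"

# The receiver pulls the Base64 string out of the "data: " line (6 characters) and decodes it.
data_lines = [line[6:] for line in event_text.splitlines() if line.startswith("data: ")]
print(base64.b64decode(data_lines[0]))   # b'Hello, binary!'

This is exactly the inefficiency mentioned above: the payload grows by about a third and has to be decoded again on the client.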
You are correct: SSE is a nice API on top of chunked HTTP. The API is good, and it also has support for reconnection.
With regard to your question about binary over SSE, I've got no experience of doing that. However, you can send binary over HTTP, so I see no reason why you couldn't do this, although you may end up having to convert it in JavaScript.

What does HTTP download exactly mean?

I often hear people say download with HTTP. What does it really mean technically?
HTTP stands for Hyper Text Transfer Protocol. So to understand it literally, it is meant for text transferring. And I used a sniffer tool to monitor the wire traffic. What gets transferred is all ASCII characters. So I guess we have to convert whatever we want to download into characters before transferring it via HTTP. Using HTTP URL encoding? Or some binary-to-text encoding scheme such as Base64? But that requires some decoding on the client side.
I always think it is TCP that can transfer whatever data, so I am guessing "HTTP download" is a misused term. It arises because we view a web page via HTTP, find a downloadable link on that page, and click it to download. In fact, the browser opens a TCP connection to download it; nothing about HTTP.
Anyone could shed some light?
The complete answer to "What does HTTP download exactly mean?" is in the RFC 2616 specification, which you can read here: https://www.rfc-editor.org/rfc/rfc2616
Of course that's a long (but very detailed) document.
I won't replicate or summarize its content here.
In the body of your question you are more specific:
So to understand it literally, it is meant for text transferring.
I think the word "TEXT" is misleading you.
And
have to convert whatever we want to download into characters before transferring it via HTTP
is false. You don't necessarily have to.
A file, for example a JPEG image, may be sent over the wire without any kind of encoding. See for example this: When a web server returns a JPEG image (mime type image/jpeg), how is that encoded?
Note that, optionally, a compression or encoding may be applied (the most common case is GZIP for textual content like HTML, text, scripts...), but that depends on how the client and the server agree on how the data has to be transferred. That "agreement" is made with the "Accept-Encoding" and "Content-Encoding" directives in the request's and the response's headers respectively.
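As a small illustration of the point that no Base64 step is involved, here is a sketch in Python that fetches an image over HTTP and inspects the raw bytes (the URL is hypothetical):

import urllib.request

req = urllib.request.Request(
    "http://example.com/photo.jpg",                 # assumed image URL
    headers={"Accept-Encoding": "identity"},        # ask for the bytes as-is, no gzip
)
with urllib.request.urlopen(req) as resp:
    print(resp.headers["Content-Type"])             # e.g. image/jpeg
    data = resp.read()                              # raw JPEG bytes straight off the wire
    print(data[:2] == b"\xff\xd8")                  # a JPEG file starts with the SOI marker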
I understand the name is misleading you, but if you read Hyper Text Transfer Protocol as a Transfer Protocol with Hypertext capabilities, then it changes a bit.
When HTTP was developed there were already lots of protocols (for example, the IP protocol, which is how data is widely transmitted between servers on the internet), but there were no protocols that allowed for easy navigation between documents.
HTTP is a protocol that allows for the transfer of information AND for hypertext (i.e. links) embedded within text documents. These links don't necessarily have to point to other text documents, so you can basically transmit any information using HTTP (the sender and the receiver agree on the type of document being sent using something called the MIME type).
So the name still makes sense, even if you can send things other than text files.
HTTP stands for Hyper Text Transfer Protocol. So to understand it literally, it is meant for text transferring.
Yes, text transferring. Not necessarily plain text, but all text. It doesn't mean that your text has to be readable by a person, just the computer.
And I used a sniffer tool to monitor the wire traffic. What gets transferred is all ASCII characters.
Your sniffer tool knows that you're a person, so it won't just present you with 0s and 1s. It converts whatever it gets to ASCII characters to make it readable to you. All communication over the wire is binary; the ASCII representation is just there for your sake.
So I guess we have to convert whatever we want to download into characters before transferring it via HTTP
No, not at all. Again, it's text – not necessarily plain text.
I always think it is TCP that can transfer whatever data, [...]
Here you're right. TCP does transfer all data, but in a completely different layer. To understand this, let's look at the OSI model:
When you send anything over the network, your data goes through all the different layers. First, the application layer. Here we have HTTP and several others. Everything you send over HTTP goes through the layers, down through presentation and all the way to the physical layer.
So when you say that TCP transfers the data, you're right (HTTP could run over other transport protocols such as UDP, but that is rarely seen). TCP transfers all your data, whether you download a file from a web server, copy a shared folder between computers on your local network, or send an email.
HTTP can transfer "binary" data just fine. There is no need to convert anything.
HTTP is the protocol used to transfer your data - in your case, any file you are downloading.
You can either do that (open another type of connection) or you can send your data as raw text. What you'll send is just what you would see when opening the file in a text editor. Your browser just decides to save the file in your Downloads folder (or wherever you want it) because it sees that the file type is not supported (.rar, .zip).
If you look at OSI model, HTTP is a protocol that lives in the application layer. So when you hear that someone uses "HTTP to transfer data" they are referring to application layer protocol. An alternative would be FTP or NFS, for example.
The browser indeed opens a TCP connection when HTTP is used. TCP lives in the transport layer and provides a reliable connection on top of IP.
The HTTP protocol provides different verbs that can be used to retrieve and send data; GET and POST are the most common ones (see the sketch below). Look up REST.
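As a rough illustration (hypothetical host and paths), the two verbs look like this on the wire:

GET /files/report.zip HTTP/1.1
Host: example.com

POST /upload HTTP/1.1
Host: example.com
Content-Type: application/octet-stream
Content-Length: 1024

(1024 bytes of request body follow)

In both cases the body can carry arbitrary bytes; the verb only describes the intent of the request.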

Sending MIDI data over HTTP

How can I efficiently transfer MIDI data to remote client over HTTP (POST)?
There are no real time issues here, I just don't know how to encode the data.
Should I use plain string pairs? I think a better way would be to just send the binary data over HTTP; I just don't know how to do it.
Thank You
Two options:
1. Encode the MIDI in Base64 and send it as the body of the POST (not sure what language you're using, but most languages have Base64 support readily available).
2. Go the multipart/form-data route and actually send the file.
Honestly, I prefer option #1 even if it means a slight overhead in size (roughly 33%). It just keeps things cleaner.
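A minimal sketch of option #1, shown in Python (the endpoint URL and file name are made up, and whatever language you actually use will have equivalent Base64 and HTTP client APIs):

import base64
import urllib.request

# Read the MIDI file and Base64-encode it so the body is plain ASCII text.
with open("song.mid", "rb") as f:
    payload = base64.b64encode(f.read())

req = urllib.request.Request(
    "http://example.com/midi-upload",     # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "text/plain"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)

The server just Base64-decodes the body to get the original MIDI bytes back.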

Remoteobject (AMFPHP)or HttpService? Which is the best to choose from?

Please explain which of the two is more secure and more powerful, and whether it is fast enough at sending data to and receiving requested data from the server in Flex!
I prefer working with remote objects via AMFPHP rather than HTTPService.
Check out James Ward's Census application for information on performance and data transfer size.
For performance, use RemoteObject.
However, since you ask for "fast enough", it really depends on your application and the amount of data.
Either channel is as secure as the other. HTTPS would make it more secure. I don't think anything can prevent packet sniffers from getting at the data in transit.
AMF (Remote Objects) – why it's better:
It's a binary protocol, but it's still encapsulated in HTTP, so there is no concern with firewalls or client issues and we can use our normal web debugging methods. A response is just HTTP headers with a binary body:
HTTP/1.1 200 OK
Date: Tue, 28 Jun 2011 12:55:26 GMT
Content-Type: application/x-amf
Server: stackoverflow.com
(binary AMF body here)
Because it is binary, it can use pointers:
- Circular references are supported.
- Objects are only transmitted one time. Common strings, for example, are only sent once; all other references to that string contain just a pointer instead of re-transmitting it. The same behavior applies to all objects.
The transmitted binary format (see the spec) is the same format the Flash Player uses to store its objects in memory, so there is:
- No (expensive) de-marshalling
- No de-serializing
- Bits from the HTTP stream flow straight into Flash Player memory.
James Ward's Census data:
- A Flex application built to transfer the same data over several different transport mechanisms, showing comparative timings of each stage of the data transfer.
- James Ward Census
AMF is supposedly always going to be faster, but HttpService with XML or JSON is probably used more often. If it's only a small project, or if it's going to use web-based APIs that may be consumed by other technologies, then HttpService may be quicker to get implemented.
If you want to quickly try out AMF PHP using ZendAMF, I put up a tutorial and demo here:
http://bbishop.org/blog/?p=441
Includes all the php and config file details, as well as server setup.
Security has little to do with it. AMF will save you bandwidth costs by using a binary protocol instead of a string-based one. It's an additional layer of obfuscation, but there are packet readers that will read AMF anyway. If you plan to have alternatives to the desktop client, say mobile, going AMF may lock you out because those other clients may not be Flash Player based. The advantage of going non-AMF is that you open up the possibility of other clients, but the trade-off is that if the app is bandwidth intensive, HTTP requests with string bodies will weigh heavier than the AMF binaries.

How to analyse a HTTP dump?

I have a file that apparently contains some sort of dump of a keep-alive HTTP conversation, i.e. multiple GET requests and responses including headers, containing an HTML page and some images. However, there is some binary junk in between - maybe it's a dump on the TCP or even IP level (I'm not sure how to determine what it is).
Basically, I need to extract the files that were transferred. Are there any free tools I could use for this?
Use Wireshark.
Look into the file format Wireshark uses for its dumps and convert your dump to it. It's very simple; it's called the pcap file format. Then you can open it in Wireshark no problem, and it should be able to recognize the contents. Wireshark supports dozens if not hundreds of communication formats at various OSI layers (including TCP/IP/HTTP) and is great for this kind of debugging.
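If you do end up writing the pcap yourself, the format is just a 24-byte global header followed by a 16-byte record header per packet. Here is a minimal sketch in Python; it assumes you have already managed to split your dump into individual raw IP packets, which is the hard part:

import struct
import time

def write_pcap(path, packets, linktype=101):       # 101 = LINKTYPE_RAW (raw IP packets)
    with open(path, "wb") as f:
        # Global header: magic, version 2.4, thiszone, sigfigs, snaplen, link type.
        f.write(struct.pack("<IHHiIII", 0xa1b2c3d4, 2, 4, 0, 0, 65535, linktype))
        for pkt in packets:
            ts = int(time.time())                   # real capture timestamps are lost anyway
            # Record header: ts_sec, ts_usec, captured length, original length.
            f.write(struct.pack("<IIII", ts, 0, len(pkt), len(pkt)))
            f.write(pkt)

Wireshark will then reassemble the TCP streams and let you export the transferred HTTP objects (File > Export Objects > HTTP).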
Wireshark will analyze on the packet level. If you want to analyze on the protocol level, I recommend Fiddler: http://www.fiddlertool.com/fiddler/
It will show you the headers sent, the responses, and will decrypt HTTPS sessions as well. And a ton more.
The Net tab in the Firebug plugin for Firefox might be of use.
