Sending MIDI data over HTTP

How can I efficiently transfer MIDI data to a remote client over HTTP (POST)?
There are no real-time constraints here; I just don't know how to encode the data.
Should I use plain string pairs? I think a better way would be to just send the binary data over HTTP; I just don't know how to do it.
Thank You

Two options:
Encode the MIDI in Base64 and send it as the body of the POST (not sure what language you're using, but most languages have Base64 support readily available)
Go the multipart/form-data route and actually send the file
Honestly, I prefer option #1 even if it means a slight size overhead (roughly 33%). It just keeps things cleaner.
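A minimal sketch of option #1 in Python (the sample MIDI header bytes are illustrative only, and the helper name is made up):

```python
import base64

def encode_midi_body(midi_bytes: bytes) -> bytes:
    """Base64-encode raw MIDI bytes for use as an HTTP POST body."""
    return base64.b64encode(midi_bytes)

# A minimal MIDI header chunk ("MThd", illustrative sample data only):
midi = b"MThd\x00\x00\x00\x06\x00\x00\x00\x01\x00\x60"
body = encode_midi_body(midi)
# Every 3 input bytes become 4 ASCII bytes: the ~33% overhead mentioned above.
```

The resulting body can then be sent with whatever HTTP client your language provides (e.g. urllib.request in Python), with a plain-text Content-Type.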

Related

Is it possible to transfer a file through CoAP?

Recently I have been working on a project where I am trying to transfer a JSON file to a CoAP server. I put some random key:value pairs in it, such as:
{
  "key1": "value1",
  "key2": ["value21", "value22", "value23"]
}
Questions:
CoAP is quite similar to HTTP. So, like HTTP, is it possible to transfer a JSON file over CoAP using the POST/PUT methods? If so, what is the recommended directory location on the server (resource directory) to put the uploaded file into?
Update:
The actual file size is about 152.8 kB.
You can transfer arbitrary JSON files using CoAP POST/PUT. Which directory is writable depends entirely on the server.
Note that for a file of that size, transfer times will be considerably longer than with HTTP, because the blocks are sent in lock-step (send the first 1 kB, wait for the response, send the next 1 kB, and so on), whereas HTTP benefits from a TCP window.
For a first attempt, you may try out eclipse/californium's "simple-fileserver-example".
cf-simple-fileserver
It supports reads (GET) and uses block option 2 for that.
If you go deeper and leave the laboratory, RFC 7959 blockwise transfers may face several issues:
CoAP usually assumes that the endpoints are identified by their IP address (and port). Because a blockwise transfer may last quite a while, that assumption can break. If the client is hit by such an address change, block option 2 (GET) may still work, but block option 1 (PUT) would require special preparation.
Because such a blockwise transfer tends to take longer, it may also get paused by temporary transmission issues. That then requires a "resume or fail" strategy. Here too, GET is much easier than PUT.
Basic transmission issues on crashes. In my experience, blockwise transfers involve many blocks, so many MIDs are in use within a short period of time. If a client crashes and selects a random MID on startup, the probability of an unnoticed MID clash is rather high. Depending on the CoAP server's deduplication implementation (strict according to RFC 7252, or advanced and aware of this issue), your client may need a strategy to escape the situation where the server retransmits unrelated messages based purely on MIDs. My experience from that time was: "analyse what you get; if it smells, wait for the 247 s :-)". Your client may also save the last used MID to overcome this, or use a special/separate "blockwise endpoint" with deduplication disabled.
Intellectual property. FMPOV some have seen the issues left to the implementation and started to file patents. That may require attention as well.
All together: if you use blockwise for payloads of sometimes a few kB, my experience is not that bad. But if you regularly transfer more, CoAP may not be the right choice.
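To make the block options and the lock-step arithmetic above concrete, here is a small Python sketch of the RFC 7959 block option encoding (the helper name is mine, not from Californium):

```python
import math

def block_option(num: int, more: bool, size: int) -> int:
    """Encode an RFC 7959 Block1/Block2 option value:
    SZX = log2(size) - 4 in the low 3 bits, the "more blocks
    follow" flag M in bit 3, and the block number NUM above that."""
    szx = size.bit_length() - 5
    if szx < 0 or szx > 6 or size != 1 << (szx + 4):
        raise ValueError("block size must be 16, 32, ..., 1024")
    return (num << 4) | (int(more) << 3) | szx

# A 152.8 kB file in 1024-byte blocks means 150 lock-step round trips,
# each block waiting for the previous acknowledgement:
blocks = math.ceil(152_800 / 1024)
```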

Why use Server-Sent Events instead of simple HTTP chunked streaming?

I just read RFC 6202 and couldn't figure out the benefits of using SSE instead of simply requesting a chunked stream. As an example use case, imagine you want to implement a client and server where the client wants to "subscribe" to events at the server using pure HTTP technology. What would be the drawback of the server keeping the initial HTTP request open and then occasionally sending new chunks as new events come up?
I found some argument against this kind of streaming, which include the following:
Since Transfer-Encoding is a hop-by-hop header rather than end-to-end, a proxy in between might try to consolidate the chunks before forwarding the response to the client.
A TCP connection needs to be kept open between client and server the whole time.
However, in my understanding, both arguments also apply to SSE. Another potential argument I could imagine is that a JavaScript browser client might have no way to actually get at the individual chunks, since re-combining them is handled at a lower level, transparently to the client. But I don't know whether that's actually the case, since video streams must work in some similar way, don't they?
EDIT: What I've found in the meantime is that SSE is basically exactly a chunked stream, wrapped in an easier-to-use API. Is that right?
And one more thing: this page first says that SSE doesn't support streaming binary data (for what technical reason?) and then (at the bottom) says that it is possible but inefficient. Could somebody please clarify that?
Yes, SSE is an API that works on top of HTTP, providing some nice features such as automatic reconnection on the client/server side and handling of different event types.
If you want to use it for streaming binary data, it is certainly not the right API. The main point is that SSE is a text-based protocol (messages are delimited by '\n' and every line starts with a text tag). If you still want to experiment with binary over SSE, a quick-and-dirty hack would be to submit the binary data as Base64.
If you want to know more about SSE, you can have a look at this simple library: https://github.com/mariomac/jeasse
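To make the text-based framing concrete, here is a small Python sketch of what one SSE frame looks like on the wire, with a binary payload smuggled in as Base64 (the event name is made up for illustration):

```python
import base64

def sse_event(event: str, data: str) -> str:
    """Format one Server-Sent Event: plain text lines, each starting
    with a field tag, terminated by a blank line."""
    return f"event: {event}\ndata: {data}\n\n"

# Binary payloads have to be smuggled in as text, e.g. Base64:
payload = base64.b64encode(b"\x00\x01\xff").decode("ascii")
frame = sse_event("chunk", payload)
```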
You are correct: SSE is a nice API on top of chunked HTTP. The API is good, and it also has support for reconnection.
With regard to your question about binary over SSE, I have no experience doing that. However, you can send binary over HTTP, so I see no reason why you couldn't do this, although you may end up having to convert it in JavaScript.

What does HTTP download exactly mean?

I often hear people say download with HTTP. What does it really mean technically?
HTTP stands for Hyper Text Transfer Protocol. So taken literally, it is meant for transferring text. I used a sniffer tool to monitor the wire traffic, and what gets transferred appears to be all ASCII characters. So I guess we have to convert whatever we want to download into characters before transferring it via HTTP. Using HTTP URL encoding? Or some binary-to-text encoding scheme such as Base64? But that would require some decoding on the client side.
I always thought it is TCP that can transfer arbitrary data, so I'm guessing "HTTP download" is a misused term. It arises because we view a web page via HTTP, find some downloadable link on that page, and then click it to download. In fact, the browser opens a TCP connection to download it. Nothing about HTTP.
Anyone could shed some light?
The complete answer to "What does HTTP download exactly mean?" is in the RFC 2616 specification, which you can read here: https://www.rfc-editor.org/rfc/rfc2616
Of course that's a long (but very detailed) document.
I won't replicate or summarize its content here.
In the body of your question you are more specific:
So to understand it literally, it is meant for text transferring.
I think the word "TEXT" is misleading you.
And
have to convert whatever we want to download into characters before transferring it via HTTP
is false. You don't necessarily have to.
A file, for example a JPEG image, may be sent over the wire without any kind of encoding. See for example this: When a web server returns a JPEG image (mime type image/jpeg), how is that encoded?
Note that, optionally, compression or encoding may be applied (the most common case is GZIP for textual content like HTML, text, scripts...), but that depends on how the client and the server agree on how the data is to be transferred. That "agreement" is made with the "Accept-Encoding" and "Content-Encoding" directives in the request's and the response's headers, respectively.
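A small Python sketch of that agreement, assuming the client advertised gzip support (the header names are the real HTTP ones; the HTML body is made up):

```python
import gzip

# If the request said "Accept-Encoding: gzip", the server may compress
# the body and declare that with "Content-Encoding: gzip":
html = b"<html><body>" + b"hello " * 100 + b"</body></html>"
compressed = gzip.compress(html)
headers = {
    "Content-Type": "text/html",
    "Content-Encoding": "gzip",
    "Content-Length": str(len(compressed)),
}
# The client reverses the encoding to recover the original bytes:
original = gzip.decompress(compressed)
```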
I understand the name is misleading you, but if you read Hyper Text Transfer Protocol as a Transfer Protocol with Hypertext capabilities, then it changes a bit.
When HTTP was developed there were already lots of protocols (for example the IP protocol, which is how data is widely transmitted between servers on the internet), but there were no protocols that allowed easy navigation between documents.
HTTP is a protocol that allows for transferring of information AND for hyper text (i.e. links) embedded within text documents. These links don't necessarily have to point to other text documents, so you can basically transmit any information using HTTP (the sender and the receiver agree on the type of document being sent using something called the mime type).
So the name still makes sense, even if you can send things other than text files.
HTTP stands for Hyper Text Transfer Protocol. So to understand it literally, it is meant for text transferring.
Yes, text transferring. Not necessarily plain text, but still text. It doesn't mean that your text has to be readable by a person, just by the computer.
And I used some sniffer tool to monitor the wire traffic. What gets transferred are all ASCII characters.
Your sniffer tool knows that you're a person, so it won't just present you with 0s and 1s. It converts whatever it captures into ASCII characters to make it readable to you. All communication over the wire is binary. The ASCII representation is just there for your sake.
So I guess we have to convert whatever we want to download into characters before transferring it via HTTP
No, not at all. Again, it's text – not necessarily plain text.
I always think it is TCP that can transfer whatever data, [...]
Here you're right. TCP does transfer all data, but in a completely different layer. To understand this, let's look at the OSI model:
When you send anything over the network, your data goes through all the different layers. First, the application layer. Here we have HTTP and several others. Everything you send over HTTP goes through the layers, down through presentation and all the way to the physical layer.
So when you say that TCP transfers the data, then you're right (HTTP could work over other transport protocols such as UDP, but that is rarely seen), but TCP transfers all your data whether you download a file from a webserver, copy a shared folder on your local network between computers or send an email.
HTTP can transfer "binary" data just fine. There is no need to convert anything.
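A minimal Python sketch illustrating this: the headers of an HTTP response are ASCII text, but the body that follows the blank line is raw, unencoded binary (the JPEG here is a fake stub, just the SOI marker plus padding):

```python
# Build a raw HTTP response carrying binary image data:
jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16
response = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: image/jpeg\r\n"
    b"Content-Length: " + str(len(jpeg)).encode("ascii") + b"\r\n"
    b"\r\n"
) + jpeg

# Splitting at the first blank line recovers the binary body untouched:
head, _, body = response.partition(b"\r\n\r\n")
```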
HTTP is the protocol used to transfer your data. In your case any file you are downloading.
You can either do that (open another type of connection) or send your data as raw text: what you send is just what you would see when opening the file in a text editor. Your browser then decides to save the file in your Downloads folder (or wherever you want it) because it sees that the file type is not supported (.rar, .zip).
If you look at the OSI model, HTTP is a protocol that lives in the application layer. So when you hear that someone uses "HTTP to transfer data", they are referring to an application-layer protocol. Alternatives would be FTP or NFS, for example.
The browser does indeed open a TCP connection when HTTP is used. TCP lives in the transport layer and provides a reliable connection on top of IP.
The HTTP protocol provides different verbs that can be used to retrieve and send data; GET and POST are the most common ones. Look up REST.

What is the better performing / more compact way to send binary data to a server in WP7

Given the no-direct-TCP/sockets limitation in Windows Phone 7, I was wondering which approach has the least performance overhead and/or can send the data in the most compact way.
I think I can send the data as a file using HTTP (probably with an HttpWebRequest) and encode it as Base64, but this would increase the transfer size significantly. I could use WCF, but the performance overhead would be large as well.
Is there a way to send plain binary data without encoding it, or some faster way to do so?
Network communication on WP7 is currently limited to HTTP only.
With that in mind, you're going to have to allow for the HTTP headers being included as part of the transmission. You can help keep this small by not adding any additional headers yourself (unless you really have to).
In terms of the body of the message then it's up to you to keep things as small as possible.
Formatting your data as JSON will typically be smaller than as XML.
If, however, your data will always be in a specific format, you could just include it as raw data. I.e., if you know that the data will have the first n bits/bytes/characters representing one thing, the next y bits/bytes/characters representing another, etc., you could format your data without any (field) identifiers. It just depends what you need.
If you want to send binary data, then certainly some people have been using raw sockets - see
Connect to attached pc from WP7 by opening a socket to localhost
However, unless you want to write your own socket server, then HTTP is very convenient. As Matt says, you can include binary content in your HTTP requests. To do this, you can use the headers:
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
Content-Length: your length
To actually set these headers, you may need to send this as a multipart message... see questions like Upload files with HTTPWebrequest (multipart/form-data)
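As a rough illustration of such a multipart message (sketched in Python rather than WP7's C#, with made-up field and file names), the body could be assembled like this:

```python
import uuid

def build_multipart(field: str, filename: str, data: bytes):
    """Assemble a multipart/form-data body carrying one binary file part.
    The field and file names are placeholders; the server defines them."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n"
        f"Content-Transfer-Encoding: binary\r\n"
        f"\r\n"
    ).encode("ascii") + data + f"\r\n--{boundary}--\r\n".encode("ascii")
    return body, f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart("file", "photo.jpg", b"\xff\xd8\xff\xe0")
```

The returned content type string (which carries the boundary) goes into the request's Content-Type header.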
There's some excellent sample code on AppHub forums - http://forums.create.msdn.com/forums/p/63646/390044.aspx - shows how to upload a binary photo to Facebook.
Unless your data is very large, it may be easier to take the 4/3 hit of Base64 encoding :) (and there are other slightly more efficient encodings too, like Ascii85 - http://en.wikipedia.org/wiki/Ascii85)
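The overhead figures can be checked quickly with the Python standard library (3 raw bytes become 4 in Base64, while 4 become 5 in Ascii85):

```python
import base64

raw = bytes(range(1, 121)) * 10   # 1200 sample bytes, divisible by 3 and 4
b64 = base64.b64encode(raw)       # 4 output bytes per 3 input bytes
a85 = base64.a85encode(raw)       # 5 output bytes per 4 input bytes
```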

How to analyse a HTTP dump?

I have a file that apparently contains some sort of dump of a keep-alive HTTP conversation, i.e. multiple GET requests and responses including headers, containing an HTML page and some images. However, there is some binary junk in between; maybe it's a dump at the TCP or even IP level (I'm not sure how to determine which it is).
Basically, I need to extract the files that were transferred. Are there any free tools I could use for this?
Use Wireshark.
Look into the file format Wireshark uses for its dumps and convert your dump to it. It's very simple; it's called the pcap file format. Then you can open it in Wireshark no problem, and it should be able to recognize the contents. Wireshark supports dozens if not hundreds of communication formats at various OSI layers (including TCP/IP/HTTP) and is great for this kind of debugging.
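As a sketch of that conversion in Python (the link-layer type value is the standard libpcap LINKTYPE_ETHERNET; the dummy frame is made up), wrapping raw packets in the classic pcap container looks like this:

```python
import struct

def pcap_bytes(packets, linktype=1):
    """Wrap raw packet byte strings in the classic pcap container so
    Wireshark can open them (linktype 1 = LINKTYPE_ETHERNET; use 101
    for raw IP packets that carry no link-layer header)."""
    # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype
    out = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, linktype)
    for ts_sec, pkt in packets:
        # Per-record header: seconds, microseconds, captured len, original len
        out += struct.pack("<IIII", ts_sec, 0, len(pkt), len(pkt)) + pkt
    return out

dump = pcap_bytes([(0, b"\x00" * 14)])  # one dummy 14-byte Ethernet frame
```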
Wireshark will analyze at the packet level. If you want to analyze at the protocol level, I recommend Fiddler: http://www.fiddlertool.com/fiddler/
It will show you the headers sent, the responses, and will decrypt HTTPS sessions as well. And a ton more.
The Net tab in the Firebug plugin for Firefox might be of use.
