RemoteObject (AMFPHP) or HTTPService? Which is the best to choose? - apache-flex

Please explain which of the two is more secure, more powerful, and fast enough at sending data to and receiving requested data from the server in Flex!
I prefer working with RemoteObject and AMFPHP rather than HTTPService.

Check out James Ward's Census application for information on performance and data transfer size.
For performance, use RemoteObject.
However, since you ask for "fast enough", it really depends on your application and the amount of data.
Either channel is as secure as the other. HTTPS would make either more secure. I don't think anything can prevent packet sniffers from getting at the data in transit.

AMF (RemoteObject) – why it's better
It's a binary protocol.
But it's still encapsulated in HTTP, so there is no concern with firewalls or client issues, and we can use our normal web debugging methods.
HTTP headers with a binary body:
HTTP/1.1 200 OK
Date: Tue, 28 Jun 2011 12:55:26 GMT
Content-Type: application/x-amf
Server: stackoverflow.com
(binary AMF body here)
Because it is binary, it can use pointers (references):
- Circular references are handled.
- Objects are transmitted only one time.
Common strings, for example, are sent only once; every other reference to that string contains just a pointer instead of being re-transmitted (a toy sketch of this follows below). The same behavior applies to all objects.
The transmitted binary format (per the spec) is the same format in which the Flash Player stores its objects in memory:
- No expensive de-marshaling
- No de-serializing
- Bits from the HTTP stream flow straight into Flash Player memory
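To make the pointer idea concrete, here is a toy Python sketch (not the real AMF3 wire format) of how a string reference table lets an encoder send each distinct string once and emit small integer references afterwards:

# Toy illustration of AMF-style string references (NOT the real AMF3 format).
def encode_strings(values):
    table = {}  # string -> index in the reference table
    out = []
    for s in values:
        if s in table:
            out.append(("ref", table[s]))  # pointer: just a small integer
        else:
            table[s] = len(table)
            out.append(("str", s))         # full value, sent exactly once
    return out

payload = ["userName", "userName", "email", "userName", "email"]
print(encode_strings(payload))
# [('str', 'userName'), ('ref', 0), ('str', 'email'), ('ref', 0), ('ref', 1)]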
James Ward Census data
- A Flex application built to use several different transport mechanisms while transferring the same data. It shows comparative timings of each stage of the data transfer.
- James Ward Census

AMF is supposedly always going to be faster, but HTTPService with XML or JSON is probably used more often. If it's only a small project, or if it's going to use web-based APIs that may be consumed by other technologies, then HTTPService may be quicker to get implemented.
If you want to quickly try out AMFPHP using ZendAMF, I put up a tutorial and demo here:
http://bbishop.org/blog/?p=441
It includes all the PHP and config file details, as well as the server setup.

Security has little to do with the choice. AMF will save you bandwidth costs by using a binary protocol instead of a string-based one. It adds a layer of obfuscation, but some packet readers will read AMF anyway. If you plan to have alternatives to the desktop client, say mobile, going AMF may lock you out, because those other clients may not be Flash Player based. The advantage of going non-AMF is that you open up the possibility of other clients; the trade-off is that if the app is bandwidth-intensive, HTTP requests with string bodies will weigh heavier than the AMF binaries.

Related

Is it possible to transfer a file through CoAP?

Recently, I have been working on a project in which I am trying to transfer a JSON file to a CoAP server. I put some random values in key:value pairs, such as:
{
  "key1": "value1",
  "key2": ["value21", "value22", "value23"]
}
Questions:
CoAP is pretty similar to HTTP. So, like HTTP, is it possible to transfer a JSON file through CoAP using the POST/PUT methods? If it is possible, what is the recommended directory location on the server (resource directory) to put the uploaded file into?
Update:
The actual file size is about 152.8 kB.
You can transfer arbitrary JSON files using CoAP POST/PUT. Which directory is writable depends entirely on the server.
Note that for a file of that size, transfer times will be considerably longer than with HTTP, because blocks are sent in lock-step (put the first 1 kB, wait for the response, put the next 1 kB, and so on), whereas HTTP benefits from a TCP window.
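For illustration, a minimal client sketch using the Python aiocoap library (the host and the /uploads/data.json resource path are made up; which paths accept a PUT is entirely up to the server):

import asyncio
import json

from aiocoap import Context, Message, PUT

async def upload():
    # JSON is just bytes on the wire as far as CoAP is concerned.
    payload = json.dumps({
        "key1": "value1",
        "key2": ["value21", "value22", "value23"],
    }).encode("utf-8")

    context = await Context.create_client_context()
    request = Message(code=PUT, uri="coap://example.org/uploads/data.json",
                      payload=payload)
    request.opt.content_format = 50  # 50 = application/json

    # For payloads this large, the library falls back to blockwise transfer.
    response = await context.request(request).response
    print("Result:", response.code)

asyncio.run(upload())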
For a first shot, you may try out eclipse/californium's "simple-fileserver-example" (cf-simple-fileserver). It supports reads (GET) and uses block option 2 for that.
If you go deeper and leave the laboratory, RFC 7959 blockwise transfers may face several issues:
CoAP usually assumes that endpoints are identified by their IP address (and port). Because a blockwise transfer may last longer, that assumption may get broken. If the client faces such an address change, block option 2 (GET) may still work, but block option 1 (PUT) would require special preparation.
Because such a blockwise transfer tends to last longer, it may also get paused due to temporary transmission issues. That then requires a "resume or fail" strategy. Here, too, GET is much easier than PUT.
Basic transmission issues arise on crashes. In my experience, blockwise transfers involve many blocks, so many MIDs are in use in a short period of time. If a client crashes and selects a random MID on startup, the probability of an unnoticed MID clash is rather high. Depending on the CoAP server's deduplication implementation (strict according to RFC 7252, or advanced and aware of this issue), your client may need a strategy to escape the situation where the server retransmits unrelated messages based purely on MIDs. My experience from that time was: "analyse what you get; if it smells, wait for the 247 s :-)". Your client could also save the last used MID to overcome this, or use a special/separate "blockwise endpoint" with deduplication disabled.
IP: from my point of view, some have seen these issues left to the implementation and started to file patents. That may require attention as well.
All together: if you use blockwise for payloads of a few kilobytes now and then, my experience is not that bad. But if you regularly transfer more, CoAP may not be the right choice.

Why use Server-Sent Events instead of simple HTTP chunked streaming?

I just read RFC 6202 and couldn't figure out the benefits of using SSE instead of simply requesting a chunked stream. As an example use case, imagine you want to implement a client and server where the client wants to "subscribe" to events at the server using pure HTTP technology. What would be the drawback of the server keeping the initial HTTP request open and then occasionally sending new chunks as new events come up?
I found some arguments against this kind of streaming, including the following:
Since Transfer-Encoding is hop-by-hop instead of end-to-end, a proxy in between might try to consolidate the chunks before forwarding the response to the client.
A TCP connection needs to be kept open between client and server the whole time.
However, in my understanding, both arguments also apply to SSE. Another potential argument I could imagine is that a JavaScript browser client might have no chance to actually get at the respective chunks, since re-combining them is handled at a lower level, transparent to the client. But I don't know if that's actually the case, since video streams must work in a similar way, don't they?
EDIT: What I've found in the meantime is that SSE is basically just a chunked stream, encapsulated by an easier-to-use API. Is that right?
And one more thing: this page first says that SSE doesn't support streaming binary data (for what technical reason?) and then (at the bottom) says that it is possible but inefficient. Could somebody please clarify that?
Yes, SSE is an API that works on top of HTTP, providing some nice features such as automatic reconnection on the client/server side and handling of different event types.
If you want to use it for streaming binary data, it is certainly not the right API. The main point is that SSE is a text-based protocol (it's delimited by '\n's, and every line starts with a text tag). If you still want to experiment with binary over SSE, a quick and dirty hack would be to submit the binary data as Base64.
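To make that concrete, here is a minimal server sketch in Python using Flask (an assumption; any framework that can stream a response works), showing both the plain-text framing and the Base64 hack for binary:

import base64
import json
import time

from flask import Flask, Response

app = Flask(__name__)

def event_stream():
    # SSE framing is plain text: "data: ..." lines, a blank line ends an event.
    yield "event: greeting\ndata: hello\n\n"
    time.sleep(1)
    # Binary has to be smuggled as text, e.g. Base64 inside a JSON field.
    blob = base64.b64encode(b"\x00\x01\x02\xff").decode("ascii")
    yield "data: " + json.dumps({"blob": blob}) + "\n\n"

@app.route("/events")
def events():
    return Response(event_stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=5000)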
If you want to know more about SSE, you can have a look at this simple library: https://github.com/mariomac/jeasse
You are correct: SSE is a nice API on top of chunked HTTP. The API is good, and it also has support for reconnection.
With regard to your question about binary over SSE, I've got no experience of doing that. However, you can send binary over HTTP, so I see no reason why you couldn't do it, although you may end up having to convert it in JavaScript.

What's the point of streaming over HTTP (MPEG-DASH)?

I was reading about streaming-over-HTTP technologies such as MPEG-DASH but don't really get the point. As I understand it, such protocols divide the binary data in the media file into chunks, wrap each chunk in some kind of metadata, then stuff these into HTTP messages and send them to the client.
But what's the point of implementing this on top of HTTP instead of creating a separate application-layer protocol? Doesn't this just introduce more overhead and unnecessarily complicate the encoding/decoding process?
Transporting stuff over HTTP isn't done for the sake of efficiency, since it's obviously inefficient. HTTP itself (at least until HTTP 2.0) is horribly inefficient.
The main reasons for using HTTP are simplicity, interoperability, and re-usability. It's simple to understand and implement, it already exists in both servers and clients, and it's well known to network infrastructure, so it can easily pass through firewalls.
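That re-use is easy to see: at the transport level, a DASH client is just an HTTP client. A sketch with Python's requests library (the manifest and segment URLs here are invented):

import requests

BASE = "http://example.org/video"  # hypothetical stream location

# Fetch the manifest, then fetch media segments as ordinary GET requests.
manifest = requests.get(BASE + "/manifest.mpd", timeout=10)
print(manifest.headers.get("Content-Type"))  # e.g. application/dash+xml

# Segment names would really come from parsing the MPD; hard-coded here.
for n in range(1, 4):
    segment = requests.get(BASE + "/segment-%d.m4s" % n, timeout=10)
    print("segment %d: %d bytes" % (n, len(segment.content)))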

What is the better performing / more compact way to send binary data to a server in WP7

Given the no-direct-TCP/sockets limitation in Windows Phone 7, I was wondering which approach has the least performance overhead and/or can send data in the most compact way.
I think I can send the data as a file over HTTP (probably with an HttpWebRequest) and encode it as Base64, but this would increase the transfer size significantly. I could use WCF, but the performance overhead would be large as well.
Is there a way to send plain binary data without encoding it, or some faster way to do so?
Network communication on WP7 is currently limited to HTTP only.
With that in mind, you're going to have to allow for the HTTP header being included as part of the transmission. You can help keep this small by not adding any additional headers yourself (unless you really have to).
In terms of the body of the message, it's up to you to keep things as small as possible.
Formatting your data as JSON will typically be smaller than as XML.
If, however, your data will always be in a specific format, you could just include it as raw data: if you know that the first n bits/bytes/characters represent one thing, the next y bits/bytes/characters represent another, etc., you can format your data without any field identifiers. It just depends what you need. (A sketch of the idea follows below.)
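A Python sketch of that idea (the record layout is invented): with a fixed layout that both sides agree on, no field identifiers travel over the wire.

import json
import struct

# Invented example record: a 32-bit reading id, a 64-bit float, a status byte.
reading_id, value, status = 42, 21.5, 1

# Self-describing: the field names travel with every single message.
with_names = json.dumps(
    {"readingId": reading_id, "value": value, "status": status}
).encode("utf-8")

# Fixed layout: first 4 bytes are the id, next 8 the value, last byte the
# status. No identifiers are transmitted at all.
without_names = struct.pack("<IdB", reading_id, value, status)

print(len(with_names), "bytes with field names")  # 45 bytes
print(len(without_names), "bytes without")        # 13 bytes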
If you want to send binary data, then certainly some people have been using raw sockets - see
Connect to attached pc from WP7 by opening a socket to localhost
However, unless you want to write your own socket server, then HTTP is very convenient. As Matt says, you can include binary content in your HTTP requests. To do this, you can use the headers:
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
Content-Length: your length
To actually set these headers, you may need to send this as a multipart message... see questions like Upload files with HTTPWebrequest (multipart/form-data)
There's some excellent sample code on AppHub forums - http://forums.create.msdn.com/forums/p/63646/390044.aspx - shows how to upload a binary photo to Facebook.
Unless your data is very large, it may be easier to take the 4/3 hit of Base64 encoding :) (there are other slightly more efficient encodings too, like Ascii85 - http://en.wikipedia.org/wiki/Ascii85)
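The overhead of the two encodings is easy to verify, for example with Python's standard library (base64.a85encode is its Ascii85 variant):

import base64
import os

raw = os.urandom(30000)  # 30 kB of example binary data

b64 = base64.b64encode(raw)
a85 = base64.a85encode(raw)

print(len(b64) / len(raw))  # 4/3 overhead -> ~1.33
print(len(a85) / len(raw))  # 5/4 overhead -> ~1.25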

Understanding REST: REST as a high volume transport?

I'm designing a system that will need to move multi-GB backup images over TCP, and I'm looking at REST as an alternative to ONC RPC.
For example, I might have
POST http://site/backups/image1
where image1 is a 50 GB file whose data is contained in the HTTP body.
My question: is this within the scope of what REST is meant for? Is it inappropriate to move massive files over HTTP? My preliminary testing shows that the performance isn't too bad, and I like the clean, debuggable protocol, as opposed to a custom ONC RPC server. But is this overloading the role of a web server?
Thanks,
-Steve
HTTP has about the same overhead as FTP.
An HTTP server is often asked to do more work than an FTP server. But otherwise, using HTTP to send a large file is about the same as using FTP.
The only consideration is making sure your web server and web application framework are configured to do this kind of thing without needlessly buffering the entire 50 GB file inside Apache.
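The same concern applies on the client side. A sketch with Python's requests library (the URL is taken from the question; the header choice is mine): passing a file object as the body makes requests stream it instead of reading the whole image into memory.

import requests

# Stream the backup image from disk; the file is read in chunks, so the
# 50 GB never sits in memory all at once.
with open("image1.img", "rb") as backup:
    response = requests.post(
        "http://site/backups/image1",
        data=backup,
        headers={"Content-Type": "application/octet-stream"},
    )
print(response.status_code)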
Steve,
HTTP has a look-before-you-leap 'feature' that allows the client to ask the server whether it will accept the data submission before it actually sends out the data. I'd look into using this to avoid transferring GBs of data only to find out that the server is currently not willing to handle them. Look at the HTTP Expect header and the 100 Continue status code.
Also, you can use FTP within a RESTful approach; in other words, think along the lines of
<backup-store href="ftp://example.org/site/backup/images/"/>
and make your clients understand the ftp URI scheme.
Finally, the T in HTTP means transfer, not transport – an important distinction to make, because the former is an application semantic (HTTP is an application protocol) and the latter is not.
HTH,
Jan
REST has nothing to do with how large your data is or which method you use to transport it.
