J2ME HttpConnection: which one is better, GET or POST? - http

In J2ME, which connection type is better: GET or POST? Which one is faster? Which uses less bandwidth? Which is supported by most handsets? What are the advantages and disadvantages of each?

Also, see "Is there a limit to the length of a GET request?", which may be relevant if you plan to abuse GET.
Be aware that network operators (certainly in the UK) have caching schemes in place that may affect your traffic.

If you look at what Opera Mini does, they only use HTTP POST in their HTTP mode.
I think this is a great idea because of the following reasons:
POSTs are never cached (according to the HTTP spec, at least) - this saves you from operator caching, etc.
Some operators seem to handle POSTs better than GETs - this is an impression I get from what some Nigerian users mention.
Opera Mini probably has more installations than any other J2ME app in the world, and if they do it, it's probably the safer choice.
No problems with the HTTP GET limit on query length.
You can use a more flexible data format, if you like, that uses less data (no encoding needed on the data, as with GET).
I think it's much cleaner, but it does require some extra work. For example, if you are parsing your HTTP web logs to count requests per "?type=blah", you'll have to move that into your site's logic.
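For reference, here is a minimal sketch of an HTTP POST with the HttpConnection API the question is about; the URL and payload are made-up placeholders:

```java
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

public class PostExample {
    // Sends the given bytes as a POST body and returns the HTTP status code.
    static int post(byte[] body) throws Exception {
        HttpConnection conn = (HttpConnection)
                Connector.open("http://example.com/api"); // hypothetical endpoint
        try {
            conn.setRequestMethod(HttpConnection.POST);
            conn.setRequestProperty("Content-Type", "application/octet-stream");
            conn.setRequestProperty("Content-Length", Integer.toString(body.length));
            OutputStream out = conn.openOutputStream();
            out.write(body); // the payload travels in the body, not in the URL
            out.close();
            return conn.getResponseCode(); // e.g. 200
        } finally {
            conn.close();
        }
    }
}
```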

If you follow the standards, GET should be used only for data retrieval and POST for submitting new data. Which one is faster or slower depends on the server-side handler implementation.

Related

multipart/mixed support in Netty

By browsing the source code and playing with some toy examples I came to the conclusion that Netty currently (as of 5.0.0.Alpha2) supports only multipart/form-data, but not multipart/mixed, at least not as specified in RFC 1341 (sec. 7.2). It looks like mixed is supported inside a part of multipart/form-data, though.
Is that really the case or am I missing something?
Since I got the very same question, I post here what could be the beginning of an answer...
However, the current implementation seems to have 2 limitations:
1) It supports only multipart/form-data. I would like to also be able to use multipart/mixed, which is very similar on the wire (see http://www.w3.org/Protocols/rfc1341/7_2_Multipart.html). I think that the encoder/decoder could be extended to understand multipart/mixed and still create the same kinds of HttpDatas.
Yes, the current codec is focused on multipart/form-data. It should be possible to extend it, or to propose a new one (probably based on it), to enable support for multipart/mixed.
The current codec was made based on user needs (mine in the beginning, others following). Since no one had yet requested support for multipart/mixed, it was not coded, except for the internal multipart/mixed handling.
The reference is RFC 1867.
As Netty loves contributions, you are more than welcome to propose yours ;-)
2) It seems that it is only possible to use efficient HttpDatas like FileUpload if you are using multipart/form-data. I would like to be able to add a FileUpload to the request and thereby make the contents of the file be the body of the request, without making it a multipart request. I think this could be done by extending the standard POST encoder to understand FileUploads.
This could be a bit more complicated, since it has to be done without multipart, which is where the FileUpload class currently lives.
Maybe a good direction would be to switch to ChunkedFile or ChunkedNioFile and to combine it with "your" HttpCodec, or in your "HttpHandler" when building the request body, in order to pass the content through the ChunkedFile.
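To make that direction concrete, here is a rough sketch (my assumption of how it could look, not the codec extension itself) of streaming a file as the raw, non-multipart body of a request with ChunkedNioFile. It assumes the pipeline already contains an HttpClientCodec (or HttpRequestEncoder) followed by a ChunkedWriteHandler:

```java
import java.io.File;

import io.netty.channel.Channel;
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.stream.ChunkedNioFile;

public class RawFileBody {
    // Writes the request head, streams the file as the body, then ends it.
    static void sendFileAsBody(Channel channel, File file) throws Exception {
        HttpRequest request = new DefaultHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.POST, "/upload"); // made-up URI
        request.headers().set("Content-Type", "application/octet-stream");
        request.headers().set("Content-Length", Long.toString(file.length()));

        channel.write(request);                  // request line + headers
        channel.write(new ChunkedNioFile(file)); // ChunkedWriteHandler streams this
        channel.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);
    }
}
```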
Hoping this helps you in the right direction...

REST design: what verb and resource name to use for a filtering service

I am developing a cleanup/filtering service that has a method which receives a list of objects serialized in XML and applies some filtering rules to return a subset of those objects.
In a RESTful service, what verb should I use for such a method? I thought that GET is a natural choice, but I have to put the serialized XML in the body of the request, which works but feels incorrect. The other verbs don't seem to fit semantically.
What is a good way to define that service interface? Naming the resource /Cleanup or /Filter seems weird, mainly because in the examples I see online it is always a noun rather than a verb being used for the resource name.
Am I right to feel that REST services are better suited for CRUD operations, and that you start bending the rules in situations like this service? If so, am I making a wrong architectural choice?
I've pushed to develop this service in a RESTful style (as opposed to SOAP) for simplicity, but awkward cases like this happen a lot and make me feel like I am missing something: either I'm choosing REST where it shouldn't be used, or maybe I'm over-thinking stuff that doesn't really matter. In that case, what really matters?
REST is about using HTTP the way it was designed. To be RESTful, consider the following (the title did say REST design :):
URLs should be permalinks to a resource (caching benefits, storing/sharing endpoints etc...)
Because they are permalinks to a resource, having verbs in the URL is a hint that you're on the wrong path (filter is a verb).
A collection of resources can be an endpoint /foos.
If you want to filter the collection of resources, consider querystring params like ?filter= or something like ?ids=1,2,3,4,5.
A GET should not change resources. Note that 'cleanup' implies something getting deleted, so be cautious of changes to resources when you do a GET; REST says a GET shouldn't alter resources. Imagine a caching server taking your cleanup request as a GET and returning OK because it's cached. Caching servers know not to cache a POST, DELETE, etc. (that's the way HTTP was designed).
Don't rule out multiple calls - for example, you may do a get to filter and get a set of resources to clean up and then could be followed by many or one DELETE verb calls to do the cleanup.
Sometimes there's a temporal resource like a transaction or a 'job' that could do work like a cleanup. Don't rule out a POST to the resource with the body containing items to cleanup up and it returns a job id. You can then query the jobid for the cleanup progress or status.
It's hard to give exact guidance because the question isn't clear, but hopefully the RESTful principles and thoughts above set you on the right track. If you clarify the exact calls, I'll try to recommend APIs.
So, let's say you wanted to clean up duplicate foos.
[GET] /foos/duplicates (or /foos?filter=duplicates)
returns a body with the identifiers of the foos that are duplicates. Let's say that returns 1,2,5 (they could be names).
Then you could issue:
[DELETE] /foos with the body being an array containing 1,2,5 (or names if unique). The GET call has no side effects, so even if it is cached, according to REST principles that's fine.
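A quick sketch of that two-step flow with a generic Java HTTP client (the host and JSON shapes are made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DuplicateCleanup {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: GET the duplicate ids - safe, cacheable, repeatable.
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/foos/duplicates"))
                .build();
        String ids = client.send(get, HttpResponse.BodyHandlers.ofString()).body();
        // e.g. ids == "[1,2,5]"

        // Step 2: DELETE with the ids in the body.
        HttpRequest delete = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/foos"))
                .header("Content-Type", "application/json")
                .method("DELETE", HttpRequest.BodyPublishers.ofString(ids))
                .build();
        client.send(delete, HttpResponse.BodyHandlers.discarding());
    }
}
```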
It's also possible and valid not to go the REST route, such as POX or JSON-RPC over HTTP, but realize that at that point it's not REST. And that's fine, but you're not getting the benefits of REST described in Fielding's thesis.
http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
Also, read this:
http://blog.steveklabnik.com/posts/2011-07-03-nobody-understands-rest-or-http
EDIT:
After reading the comment where you clarified that you're sending the server a set of objects (not persisted server side) and it returns the subset with the dupes filtered out (like a server-side helper function), some options are:
Do this client/browser side if possible - why take the network round trip just to filter dupes out of a collection?
If for some reason only the server has the specific knowledge/data to determine that two items are functionally equivalent (even though the data are not exactly the same), then consider POSTing the data set to the server, with the response body containing the unique/filtered set. Even though the server isn't persisting the set, it falls under a 'temporal' object or set, and the server is modifying it. It's not conceptually a GET of server resources, and caching offers no benefits in that scenario.
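For illustration, here is what the server-side shape of that POSTed filter could look like, sketched with JAX-RS annotations (the framework, path, and payload type are my assumptions; a JSON provider is assumed to be configured):

```java
import java.util.LinkedHashSet;
import java.util.List;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/api/filter") // verb-like name kept under /api, per the advice below
public class FilterResource {
    // The whole set arrives in the body; the filtered subset goes back.
    // Nothing is persisted server side.
    @POST
    @Consumes("application/json")
    @Produces("application/json")
    public List<String> filter(List<String> items) {
        // Drop duplicates while keeping first-seen order.
        return List.copyOf(new LinkedHashSet<>(items));
    }
}
```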
Last question first: What really matters is getting the job done in a way that is
Correct
As easy to use as practical
Easily maintained by future programmers (likely to include yourself)
REST is a natural fit for operations on resources where each URL matches some object that can be manipulated. It is a less natural fit for other uses, but these are more guidelines than actual rules. Others have pointed out the original dissertation on REST, but it is worth remembering that few implementations are pure.
If you have several URLs that perform these transformative kinds of functions, consider putting them in their own special URL space, like /api/filter and /api/transliterate, etc. That will help users and maintainers alike know that certain URLs aren't REST but are more like remote procedure calls: you POST data to these URLs and get some kind of data back.
If you get stuck on specific names you should make a list of candidates, have a few beers, then choose one from the list. That's what I do when I get stuck on minutia.
SOAP is a neat protocol and has its uses, but it tends to be very heavy. Good documentation and consistency are probably more important to your budding API than using any specific technology.

Is there anything not good using POST instead of GET?

I know the difference between POST and GET; however, if I used POST instead of GET, is there anything bad about that besides not conforming to the W3C standards?
Any inefficiency, insecurity, or anything else?
See the answer from deceze:
POST requests can't be bookmarked.
In all the interviews I've done, all the teaching I've done, this is the best place to start. There's a lot more, but start with this.
Ignore anything anyone says about security. A good hacker can change POST to GET easily.
If you get this far, know that POST changes data (adds a membership, or charges a credit card), whereas GET only fetches data (searches for red shirts). The makers of browsers make their browsers behave differently for the results of POST vs GET. The results of POST have side effects that you may not want to repeat (such as adding another membership or double charging a credit card).
If you understand THIS, then read about the POST-Redirect-GET pattern, and understand it well. (Then know that GET has a URL length limit, and that you may need to resort to POST in this case.)
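A minimal sketch of the Post-Redirect-Get pattern just mentioned, using the Servlet API (my choice of framework; the helper and paths are hypothetical):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MembershipServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The side effect happens exactly once, in the POST.
        addMembership(req.getParameter("user")); // hypothetical helper
        // 303 See Other makes the browser follow up with a harmless GET,
        // so a refresh can't re-submit the POST (no double charge).
        resp.setStatus(HttpServletResponse.SC_SEE_OTHER);
        resp.setHeader("Location", "/membership/confirmation");
    }

    private void addMembership(String user) { /* ... */ }
}
```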
Never use POST requests for normal view-only pages. POST requests can't be bookmarked, sent in an email, or otherwise reused. They screw up proper navigation using the browser's back/forward buttons. Only ever use them for sending data to the server in one unique operation, and (usually) have the server answer with a redirect.
Other than that, they're not more or less efficient or secure than GET requests, they're just for a different purpose.

What is the difference between GET and POST in the context of creating an AJAX request?

I have an AJAX request that sends a GET: 'getPendingList'. This request should return a JSON string listing the pending requests that need to be approved. I'm a little confused about whether I should be using GET or POST here.
From this website:
GET requests can be cached
GET requests can remain in the browser history
GET requests can be bookmarked
GET requests can be distributed & shared
GET requests can be hacked (ask Jakob!)
So I'm thinking: I don't want the results of this GET to be cached because the pending list could change. On the other hand, using POST doesn't seem to make much sense either.
How should I think about GET and POST? I've been told that GET is the same as a 'read'; it doesn't (or shouldn't) change anything on the server side. This makes sense. What doesn't make sense is the caching part; it wouldn't work for me if someone else cached my GET request because I'm expecting the data to change.
Yahoo's best practices might be worth reading over. They recommend using GET primarily for retrieving information and using POST for updating information. In a separate item, they also recommend that you make AJAX requests cacheable where it makes sense. Check it out, it's a good read.
In short, GET requests should be idempotent. POST requests are not.
If you are altering state, use POST - otherwise use GET.
And don't forget, when talking about caching with GET/POST, that is browser-caching.
Nothing stopping you from caching the data server-side.
Also, in general - JSON calls should be POST (here's why)
So, after some IRC'ing, it looks like the best way to do this is to use GET (in this particular instance), but to prevent caching. There are two ways to do this:
1) Append a random string to your GET request.
This seems like a hacky way to do it, but it sounds like it might be the only solution for IE: Prevent browser caching of jQuery AJAX call result.
2) In your response from the server, set the headers to no-cache.
It's not clear what the definitive behavior is on this. Some folks (see the previous link) claim that IE doesn't respect the no-cache directives. Other folks seem to think that this works: Internet Explorer 7 Ajax links only load once.
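As an illustration of option 2 on the server side, here is a sketch with the Servlet API (an assumption; the original doesn't say what the server runs) that marks the pending-list response uncacheable:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PendingListServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Option 2: tell browsers and proxies not to cache this response.
        resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
        resp.setHeader("Pragma", "no-cache"); // for HTTP/1.0 caches
        resp.setDateHeader("Expires", 0);     // already expired
        resp.setContentType("application/json");
        resp.getWriter().write("[]");         // the real pending list goes here
    }
}
```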

Are there any tools for diffing HTTP requests/responses?

I am trying to debug some problems with very picky/complex webservices where some of the clients that are theoretically making the same requests are getting different results. A debugging proxy like Charles helps a lot, but since the requests are complex (lots of headers, cookies, query strings, form data, etc.) and the clients create the headers in different orders (which should be perfectly acceptable), it's an extremely tedious process to do manually.
I'm pondering writing something to do this myself but I was hoping someone else had already solved this problem?
As an aside, does anyone know of any Charles-like debugging proxies that are completely open source? If Charles were open source, I would definitely contribute any work I did on this front back to the project. If there is something similar out there, I would much rather do that than write a separate program from scratch (especially since I imagine Charles or any analogue already has all of the data structures I might need, etc.).
Edit:
Just to be clear - text diffing will not work, because the order of lines (headers, at least) may differ, and the order of values within lines (cookies, at least) may differ. In both cases, as long as the names, values, and metadata are all the same, the different ordering should not cause requests that are otherwise the same to be considered different.
Fiddler has such an option, if you have WinDiff in your path. I don't know whether it will suit your needs, though, because at first glance it's just doing text comparisons. Perhaps it normalizes the sessions before that, so I can't say.
If there's nothing purpose-built for the job, you can use packet capture to get the message content saved to a text file (something that inserts itself into the IP stack, like CommView). Then you can text-diff the results for different messages.
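A plain text diff will still flag the reordering problem described above, so a small normalization pass would be needed first. A sketch of what that could look like (the raw "Name: value" input format is my assumption):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class HttpRequestDiff {
    // Lower-cases header names, groups values by name, splits cookies,
    // and sorts everything, so that reordered-but-equal requests match.
    static SortedMap<String, List<String>> normalize(List<String> headerLines) {
        SortedMap<String, List<String>> out = new TreeMap<>();
        for (String line : headerLines) {
            int colon = line.indexOf(':'); // assumes well-formed "Name: value"
            String name = line.substring(0, colon).trim().toLowerCase();
            String value = line.substring(colon + 1).trim();
            List<String> values = out.computeIfAbsent(name, k -> new ArrayList<>());
            if (name.equals("cookie")) {
                // Split "a=1; b=2" so cookie order doesn't matter either.
                values.addAll(Arrays.asList(value.split(";\\s*")));
            } else {
                values.add(value);
            }
        }
        out.values().forEach(Collections::sort);
        return out;
    }

    static boolean sameHeaders(List<String> a, List<String> b) {
        return normalize(a).equals(normalize(b));
    }
}
```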
Can the open-source proxy Squid maybe help?
