Which HTTP response statuses mean that a client really got the desired content?
The simple answer would be that all 2XX statuses mean that, but there is one notable exception I'm aware of: 304 Not Modified. 304 means that the client already has the desired content, i.e. the client really got the content.
Other redirects don't mean that, since the client has to make another request.
What is the full list of statuses that mean a client really got the content? By that I mean: which statuses normally let me assume the client saw something useful?
P.S. I understand that my "really got content" is pretty informal.
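Here is a rough sketch of the informal check I have in mind (Python with the requests library, purely for illustration; the ETag value is a placeholder):

import requests

# Rough sketch of the informal check: any 2xx status means the client received
# the content it asked for, and 304 means it already had a valid cached copy.
def client_got_content(status_code):
    return 200 <= status_code < 300 or status_code == 304

# Example: a conditional GET ("some-etag" is just a placeholder value).
response = requests.get("https://example.com/", headers={"If-None-Match": '"some-etag"'})
print(client_got_content(response.status_code))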
I am trying to test the 278I Prior Authorization request with Availity and receiving a TA1 error in response.
I am using the following document for reference on the 278I request.
EDI-278i-Companion-Guide-Non-HIPPA-005010X215.pdf
The format of the request is as follows:
278 Inquiry Global ID - Request
Error from Availity in response
As per the documentation in a few online articles and documents related to EDI acknowledgement errors, the line TA1*768758654*221122*0040*R*013~ means: Security Information Value is missing or incorrect.
I have been trying to figure out the reason for this issue for the past two days and couldn't get it to work. It would be really great if anyone could help or guide me to solve this issue, or point me to some useful information related to it.
Ensure that the ISA segment's length is exactly 106 characters and that both ISA02 and ISA04 contain exactly 10 characters (space-padded if needed). Also, ensure that there are no hidden symbols, such as tabs or CR/LFs, anywhere inside the contents. CR/LFs are okay only after the segment terminator.
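As a rough sketch of those checks (plain Python; it assumes '*' as the element separator and '~' as the segment terminator, so adjust if your file uses different delimiters):

def check_isa(isa_segment):
    problems = []
    # The ISA segment is fixed-length: 106 characters including the segment
    # terminator. Adjust if your tooling counts the terminator differently.
    if len(isa_segment) != 106:
        problems.append("ISA length is %d, expected 106" % len(isa_segment))
    if any(ch in isa_segment for ch in "\t\r\n"):
        problems.append("hidden tab/CR/LF characters inside the ISA segment")
    elements = isa_segment.rstrip("~").split("*")
    if len(elements) > 4:
        # ISA02 = Authorization Information, ISA04 = Security Information
        for index in (2, 4):
            if len(elements[index]) != 10:
                problems.append("ISA%02d is %d characters, expected exactly 10 (pad with spaces)"
                                % (index, len(elements[index])))
    return problems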
I was practicing some HTTP POST requests with a post-test server (ptsv2.com).
After a successful post, I can see the request body and headers etc. on the site.
There is one value that I am unsure about:
X-Cloud-Trace-Context
What is this for? I understand the rest of the header data, but I can't seem to find a good explanation of that one part.
Thank you.
X-Cloud-Trace-Context is a header from Google Cloud Platform to identify the current request.
It is particularly useful for correlating logs. As per the documentation (https://cloud.google.com/appengine/docs/standard/python3/writing-application-logs), the correlation has to be done manually. This post explains in detail how to do it: https://code.luasoftware.com/tutorials/google-app-engine/app-engine-standard-python37-correlate-application-log-with-request-log/
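For illustration, here is a minimal Python sketch of how the header is typically used for correlation (the function name and project_id are placeholders of mine; the "TRACE_ID/SPAN_ID;o=TRACE_TRUE" format is from the documentation linked above):

def parse_cloud_trace_context(header_value, project_id):
    # Header format: "TRACE_ID/SPAN_ID;o=TRACE_TRUE" - we only need the trace ID.
    trace_id = header_value.split("/")[0]
    # Cloud Logging correlates application logs with the request log when the
    # log entry carries the trace in this form.
    return "projects/%s/traces/%s" % (project_id, trace_id)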
When I loaded a web page after submitting something into a JavaScript-type "form", I looked at the HTTP headers with the Firefox add-on. Everything in the headers makes sense except for 16 random characters in the middle that always come after the word "callback". I don't know what they mean or where they come from.
These are all from SEPARATE "form submissions", if you will.
"http://www.locationary.com/access/proxy.jsp?ACTION_TOKEN=proxy_jsp$JspView$SaveAction&callback=callback8FDRUTrnQgGI2iuZ&inPlaceID=1003168722&xxx_c_1_f_987=http%3A%2F%2Fwww.yellowpages.com%2Fdallas-tx%2Fmip%2Fdallas-womens-foundation-13224281%3Flid%3D13224281"
"http://www.locationary.com/access/proxy.jsp?ACTION_TOKEN=proxy_jsp$JspView$SaveAction&callback=callbackPAgvDXBbZuLXbAHw&inPlaceID=1014875244&xxx_c_1_f_987=http%3A%2F%2Fwww.yellowpages.com%2Fmorrill-me%2Fmip%2Fshear-talent-country-style-14741614%3Flid%3D14741614"
"http://www.locationary.com/access/proxy.jsp?ACTION_TOKEN=proxy_jsp$JspView$SaveAction&callback=callback5GgVkaOind0ySooX&inPlaceID=1015406723&xxx_c_1_f_987=http%3A%2F%2Fwww.yellowpages.com%2Fgalesburg-mi%2Fmip%2Fmichigan-grower-products-8776287%3Flid%3D8776287"
As you can see, they all start out with the same thing:
"http://www.locationary.com/access/proxy.jsp?ACTION_TOKEN=proxy_jsp$JspView$SaveAction&callback=callback"
But after that, there is always a set of 16 seemingly random characters. I understand the rest of this "url" but these 16 characters don't make sense to me. Is there any way to generate them or get them before the request is sent?
Thanks!
These are almost certainly "AJAX" requests, being used as JSONP. The callback=... value is the name of a dynamically created JavaScript function that will handle the data returned by the HTTP request.
I'd recommend using Firebug to view all of this - it may help shed a little more light on things.
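The site's exact naming scheme isn't documented here, but the suffixes above look like 16 random alphanumeric characters, which a client could generate with something like this Python sketch (the function name is made up):

import random
import string

def make_callback_name(prefix="callback", length=16):
    # 16 random alphanumeric characters appended to the prefix, like the URLs above.
    suffix = "".join(random.choices(string.ascii_letters + string.digits, k=length))
    return prefix + suffix

print(make_callback_name())  # e.g. callback8FDRUTrnQgGI2iuZ, different every time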
For some reason, non-IE browsers seem to persist a URL hash (if present) when a server-side redirect is sent (using the Location header). Example:
Test.aspx
// a simple redirect using Response.Redirect("http://www.yahoo.com");
If I visit:
Test.aspx#foo
In Firefox/Chrome, I'm taken to:
http://www.yahoo.com#foo
Can anyone explain why this happens? I've tried this with various server-side redirects on different platforms as well (all resulting in the Location header, though) and this always seems to happen. I don't see it anywhere in the HTTP spec, but it really seems to be a problem with the browsers themselves. The URL hash (as expected) is never sent to the server, so the server redirect isn't polluted by it; the browsers are just persisting it for some reason.
Any ideas?
I suggest that this is the correct behaviour. The 302 and 307 status codes indicate that the resource is to be found elsewhere. #bookmark is a location within the resource.
Once the resource (html document) has been located it is for the browser to locate the #bookmark within the document.
The analogy is this: You want to look something up in a book in chapter 57, so you go to the library to get the book. But there is a note on the shelf saying the book has moved, it is now in the other building. So you go to the new location. You still want chapter 57 - it is irrelevant where you got the book.
This is an aspect that was not covered by previous HTTP specifications but has been addressed in the later HTTP development:
If the server returns a response code of 300 ("multiple choice"), 301 ("moved permanently"), 302 ("moved temporarily") or 303 ("see other"), and if the server also returns one or more URIs where the resource can be found, then the client SHOULD treat the new URIs as if the fragment identifier of the original URI was added at the end. The exception is when a returned URI already has a fragment identifier. In that case the original fragment identifier MUST NOT be added to it.
So the fragment of the original URI should also be used for the redirection URI, unless the redirection URI already contains a fragment of its own.
Although this was just a draft that expired in 2000, it seems that the behavior described above is the de facto standard among today's web browsers.
@Julian Reschke or @Mark Nottingham probably know more/better about this.
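To make that rule concrete, here is a small Python sketch (the function name is mine, not from the draft):

from urllib.parse import urlsplit, urlunsplit

def resolve_redirect_fragment(original_url, location_url):
    # Carry the original fragment over to the redirect target, unless the
    # Location URI already has a fragment of its own.
    original = urlsplit(original_url)
    target = urlsplit(location_url)
    fragment = target.fragment or original.fragment
    return urlunsplit((target.scheme, target.netloc, target.path, target.query, fragment))

# resolve_redirect_fragment("http://host/Test.aspx#foo", "http://www.yahoo.com")
# -> "http://www.yahoo.com#foo", which matches what Firefox/Chrome do.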
From what I have found, it doesn't seem clear what the exact behaviour should be. There are plenty of people having problems with this; some of them want to keep the bookmark through the redirect, some want to get rid of it.
Different browsers handle this differently, so in practice it's not useful to rely on either behaviour.
It definitely is a browser issue. The browser never sends the bookmark part of the URL to the server, so there is nothing that the server could do to find out if there is a bookmark or not, and nothing that could be done about it reliably.
When I put the full URL in the action attribute of the form, it keeps the hash. But when I use just the query string, it drops the hash. E.g.,
Keeps the hash:
https://example.com/edit#alrighty
<form action="https://example.com/edit?ok=yes">
Drops the hash:
https://example.com/edit
<form action="?ok=yes">
I am new to web programming and just curious to know about the GET and POST methods of sending data from one page to another.
It is said that the GET method is faster than POST but I don't know why.
One reason I could find is that GET can only take 255 characters?
Is there any other reason? Could someone please explain it to me?
It's not much about speed. There are plenty of cases where POST is more applicable. For example, search engines will index GET URLs and browsers can bookmark them and make them show up in history. As a result, if you take actions like modifying a DB based on a GET request, it might be harmful as some bots might also traverse the URL.
The other case is security: if you send credentials using GET, they'll end up in the browser history and in server log files.
There are several misconceptions about GET and POST in HTTP. There is one primary difference: GET is defined to be safe and idempotent, while POST is not. In practice this means that a GET causes no side effects, i.e. I can send a GET to a web application as many times as I want (think hitting Ctrl+R or F5 repeatedly) and the requests will be 'safe'.
I cannot do that with POST; a POST may change data on the server. For example, if I order an item on the web, the item should be added with a POST because state changes on the server: the number of items I've ordered has increased by one. If I do this with a POST and hit refresh, the browser warns me; if I do it with a GET, the browser simply sends the request again.
On the server, GET vs POST is pure convention, i.e. it's up to me as a developer to make sure that a repeated POST doesn't repeat the action on the server. There are various ways of doing this, but that's another question.
To actually answer the question: whether I use GET or POST to perform the same task makes no performance difference.
You can read the RFC (http://www.w3.org/Protocols/rfc2616/rfc2616.html) for more details.
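As a toy illustration of that convention (a sketch of mine using Flask; the routes and the "name" field are made up, not from the question): the GET handler can be repeated freely, while every repeated POST changes state on the server again.

from flask import Flask, jsonify, request

app = Flask(__name__)
items = []

@app.route("/items", methods=["GET"])
def list_items():
    # Safe: refresh this as many times as you like, nothing on the server changes.
    return jsonify(items)

@app.route("/items", methods=["POST"])
def add_item():
    # Not safe to repeat: every resubmission adds another item.
    items.append(request.form["name"])
    return jsonify(count=len(items)), 201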
Looking at the HTTP protocol, POST and GET should be equally easy and fast to parse. I would argue there is no performance difference.
Take a look at the raw HTTP headers
HTTP GET
GET /index.html?userid=joe&password=guessme HTTP/1.1
Host: www.mysite.com
User-Agent: Mozilla/4.0
HTTP POST
POST /login.jsp HTTP/1.1
Host: www.mysite.com
User-Agent: Mozilla/4.0
Content-Length: 27
Content-Type: application/x-www-form-urlencoded
userid=joe&password=guessme
From my point of view, performance should not be considered when comparing GET and POST.
You should think of GET as "a place to go", and POST as "doing something". For example, a search form should be submitted using GET because the search result page is a "place" and the user will want to bookmark it or retrieve it from their history at a later date. If you submit the form using POST the user can only recreate the page by submitting the form again. On the other hand, if you were to perform an action such as clicking a delete button, you would not want to submit this with GET, as the action would be repeated whenever the user returned to the URL.
Just my few cents from 2016.
I am creating a simple message system. At first I used POST to receive new alerts. In jQuery I had:
$.post('/a/alerts', 'stamp=' + STAMP, function(result)
{
});
And in PHP I used $_POST['stamp']. Even from localhost I got 90-100 ms for every request like this.
I simply changed:
$.get('/a/alerts?stamp=' + STAMP, function(result)
{
});
and in PHP switched to $_GET['stamp']. That was a little less than a minute of changes. Now every request takes 30-40 ms.
So GET can be twice as fast as POST. Of course not always, but for small amounts of data I get the same results every time.
GET is slightly faster because the values are sent in the URL as part of the request line, whereas with POST the values are sent in the request body, in the format that the content type specifies.
Usually the content type is application/x-www-form-urlencoded, so the request body uses the same format as the query string:
parameter=value&also=another
When you use a file upload in the form, you use the multipart/form-data encoding instead, which has a different format. It's more complicated.
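For illustration, a small Python sketch of that point: the parameters are encoded the same way whether they travel in the URL (GET) or in an application/x-www-form-urlencoded body (POST); only where they are placed differs.

from urllib.parse import urlencode

params = {"parameter": "value", "also": "another"}
encoded = urlencode(params)                    # 'parameter=value&also=another'

get_request_line = "GET /page?" + encoded + " HTTP/1.1"   # values travel in the URL
post_body = encoded                                        # values travel in the body, with
                                                           # Content-Type: application/x-www-form-urlencoded
print(get_request_line)
print(post_body)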
I agree with the other answers, but it was not mentioned that GET requests can be cached while POST requests generally are not. I think this is the main reason some GET requests are performed faster.
(Of course, this means that sometimes no request is actually sent at all. So it's not really the GET request itself that is faster, but your browser's cache.)
HTTP Methods: GET vs. POST: http://www.w3schools.com/tags/ref_httpmethods.asp
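To sketch why caching matters here (my own toy example using Flask, not from the answer above): a GET response can declare itself cacheable, so a browser or proxy may answer repeat requests without contacting the server at all.

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/news")
def news():
    response = make_response("latest headlines")
    # A cacheable GET: browsers/proxies may reuse this response for 60 seconds
    # without sending another request to the server.
    response.headers["Cache-Control"] = "public, max-age=60"
    return response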
POST adds a few headers and a request body, making the request slightly larger, but the difference ought to be negligible, so I don't see why this should be a concern.
Just bear in mind that the proper way to speak HTTP is to use GET only for retrieving data and POST for actions that submit or change data. You don't have to, but you also don't want a case where Google's bots can, for example, insert, delete or manipulate data that was only meant for a human to handle, simply because they follow the links they find.