If modified since - HTTP protocol

If my browser uses a local cache, is it GUARANTEED that every HTTP request it sends contains an "If-Modified-Since" header line?
If not, how do I make it do so? And if I configure a proxy server in the browser, will the proxy add the header automatically?
Thanks in advance.

I was just working on this with my RESTful web service and ran a few tests for a particular resource. First of all I was trying to control the browser cache from my web server by setting the following HTTP headers on the HTTP response for the resource:
Cache-Control: must-revalidate, max-age=30
Last-Modified: Mon May 19 11:21:05 GMT 2014
Expires: Mon May 19 11:51:05 GMT 2014
Then, from my web UI, I have a timer that does a GET on that cacheable resource every 5 seconds. While the copy in the browser cache has not yet expired, the GET is served from the browser cache. Once "max-age" has expired, the next GET goes to the server and the browser adds the "If-Modified-Since" header with the "Last-Modified" date as its value, like this:
[GET] - /cms_cm_web/api/notification
referer: http://localhost:8080/cms_ui/#/
accept: application/json, text/plain, */*
accept-language: en-us
ua-cpu: AMD64
accept-encoding: gzip, deflate
user-agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
host: localhost:8080
if-modified-since: Mon, May 19 11:21:05 GMT 2014
connection: Keep-Alive
This came from the IE9 browser; I get the same from the latest Firefox and Chrome browsers as well.
From here the server can look for the "If-Modified-Since" header; if it determines the resource has not been modified, it returns a 304 Not Modified response, otherwise it returns the resource representation with a 200 OK response.
So, according to the HTTP specification, you can control caching using the "Expires" and/or "Cache-Control" headers together with a "Last-Modified" header. This causes the browser cache to perform what's called a "conditional GET" request, since it includes the "If-Modified-Since" header.
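As a minimal sketch of the server side of that exchange (Node.js core http module only; the resource, its Last-Modified timestamp, and the port are placeholders):

const http = require('http');

const lastModified = new Date('2014-05-19T11:21:05Z');   // placeholder timestamp
const body = JSON.stringify({ notifications: [] });      // placeholder payload

http.createServer((req, res) => {
  const since = req.headers['if-modified-since'];
  // If the client's cached copy is still current, answer 304 with no body.
  if (since && new Date(since) >= lastModified) {
    res.writeHead(304);
    return res.end();
  }
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'Cache-Control': 'must-revalidate, max-age=30',
    'Last-Modified': lastModified.toUTCString(),
    'Expires': new Date(Date.now() + 30 * 1000).toUTCString(),
  });
  res.end(body);
}).listen(8080);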

Related

Trying to understand how to respond to CORS OPTIONS request with 403 and when

After reading many websites on CORS, I really want some validation to see if I have this right. Here is the preflight request I receive:
OPTIONS /frog/LOTS/upload/php.php HTTP/1.1
Host: staff.curriculum.local
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/14.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Origin: http://frogserver.curriculum.local
Access-Control-Request-Method: POST
Access-Control-Request-Headers: cache-control,x-requested-with
Pragma: no-cache
Cache-Control: no-cache
Here is when I think I should respond with 403:
1. If the allowed origin configured on the server is not * and not that domain, I respond with 403.
2. If I do not support POST (I may only support GET), I respond with 403.
3. If I do not support any ONE of the requested headers, I respond with 403.
For #1, if the domain is not supported, I will NOT send any Access-Control headers in the response. I think that is OK.
For #2 and #3, I would send these headers, assuming the Origin request header was a match:
Access-Control-Allow-Origin: http://frogserver.curriculum.local
Access-Control-Allow-Credentials: {exists and is true IF supported on server}
Access-Control-Allow-Headers: {all request headers we support and not related to incoming 'Access-Control-Request-Headers' in the request}
Access-Control-Allow-Methods: {all methods supported NOT related to incoming method POST that came in?}
Access-Control-Expose-Headers: {all headers we allow browser origin website to read}
Is my assumption correct here? Or are some of the response headers related to the request headers in some way I am not seeing? (The names are similar, but I don't think their behaviour is tied together.)
I would find it really odd that the request even needs to contain Access-Control-Request-Method and Access-Control-Request-Headers if we were not supposed to send back a 403 in the cases where we don't support everything that was requested on that endpoint. This is why I 'suspect' we are supposed to return a 403 along with what we do support.
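To make the proposed rules concrete, a rough sketch of that decision logic might look like this (plain Node.js; the allowed origin, methods, and headers are example values modelled on the request above, and the 403 responses follow the rules proposed above rather than anything mandated by the CORS spec):

const http = require('http');

// Example policy values (modelled on the preflight request shown above).
const ALLOWED_ORIGIN = 'http://frogserver.curriculum.local';
const ALLOWED_METHODS = ['GET', 'POST'];
const ALLOWED_HEADERS = ['cache-control', 'x-requested-with'];

function handlePreflight(req, res) {
  const origin = req.headers['origin'];
  const method = req.headers['access-control-request-method'];
  const requested = (req.headers['access-control-request-headers'] || '')
    .split(',').map(h => h.trim().toLowerCase()).filter(Boolean);

  const originOk = origin === ALLOWED_ORIGIN;                          // case #1
  const methodOk = ALLOWED_METHODS.includes(method);                   // case #2
  const headersOk = requested.every(h => ALLOWED_HEADERS.includes(h)); // case #3

  if (!originOk) {
    res.writeHead(403);            // case #1: no Access-Control-* headers at all
    return res.end();
  }
  // Cases #2 and #3: advertise what we do support, whether or not we accept this request.
  res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  res.setHeader('Access-Control-Allow-Methods', ALLOWED_METHODS.join(', '));
  res.setHeader('Access-Control-Allow-Headers', ALLOWED_HEADERS.join(', '));
  res.writeHead(methodOk && headersOk ? 204 : 403);
  res.end();
}

http.createServer((req, res) => {
  if (req.method === 'OPTIONS') return handlePreflight(req, res);
  res.writeHead(404);
  res.end();
}).listen(8080);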
thanks,
Dean

http accept and content-type headers confusion

This is an example of an HTTP request message transmitted to a web server. Among the headers there is an Accept header. I am confused about its meaning and how it is created. I thought it solely specified my browser's capability to handle files, but that doesn't explain why it differs when I visit amazon.com or joes-hardware.com.
There is also the Content-Type header, which gives the MIME type of a file. Same question: how does my browser know the type of the file it requested? Is it based on the extension in the URI, or is it a generic header? (Edit: this header seems to only be sent in response headers. My mistake.)
GET /tools.html HTTP/1.0
User-agent: Mozilla/4.75 [en] (Win98; U)
Host: www.joes-hardware.com
Accept: text/html, image/gif, image/jpeg
Accept-language: en
First things first: Accept and Accept-Language are headers defined in RFC 7231, section 5.3.2 and section 5.3.5, respectively. Together with the other Accept-* headers, they enable content negotiation driven by the client. There is an excellent article regarding content negotiation on the Mozilla Developer Network. (On a side note: MDN is an excellent starting point for research. A lot of the articles are outdated, but the concepts are still largely valid.)
The content of Accept-Language is largely controlled by the language settings of the client's UI. Mozilla's Firefox (and, IIRC, Opera and Safari) lets you tweak these through its settings, while MSIE seems to deduce them from the keyboard layouts installed on the system. There is nothing in the type of requested media that should influence this header.
The content of the Accept header, on the other hand, depends very much on the context in which a resource is requested. E.g. if you request a resource through your browser's address bar, the Accept header will pretty much read like "give me anything I can digest." If the browser requests a resource through an <img/> tag, the header will differ in that the browser tries to get a representation of the requested resource that is fit for display inside that tag. The same goes for <video/>, <audio/>, and <script/>.
Beyond that, I am not aware of any mechanisms affecting the Accept header. <a/> tags have, unbeknownst to most, a type attribute carrying a MIME media type. This is, however, a fallback mechanism and should not alter Accept in any way.
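On the server side, honouring these headers is plain content negotiation: inspect Accept and pick a representation. A rough sketch (Node.js core http module only; it matches media types naively and ignores q-values, which a real implementation should not):

const http = require('http');

// Sketch: choose a representation of the same resource based on the Accept header.
http.createServer((req, res) => {
  const accept = req.headers['accept'] || '*/*';
  const tool = { name: 'hammer', price: 9.99 };   // made-up resource

  if (accept.includes('application/json')) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(tool));
  } else if (accept.includes('text/html') || accept.includes('*/*')) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<p>' + tool.name + ': $' + tool.price + '</p>');
  } else {
    res.writeHead(406);                           // no acceptable representation
    res.end();
  }
}).listen(8080);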
As for your example, I took the liberty of requesting both sites and copying the relevant request headers:
amazon.com
GET / HTTP/1.1
Host: www.amazon.com
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
joes-hardware.com
GET / HTTP/1.1
Host: www.joes-hardware.com
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
The headers are no different when requesting /tools.html in the last example.

Multiple Protocols - Correct Implementation

I was inspecting some of the HTTP exchanges between my browser and Google and it triggered this question.
In short, my browser (Firefox 36.0.4) is making HTTP/1.1 requests and Google is responding with HTTP/2.0; there is no attempt to respond in the requested protocol. I am aware that much of the HTTP/2.0 spec has already been implemented in a haphazard way through SPDY, but this seems like poor negotiation with the client.
I thought that the purpose of declaring the protocol version in the request was that the server would be able to determine how it should respond to the client, in one of four ways:
1. The client has requested the server's preferred protocol, so the server continues with the request as normal.
2. The client has requested another protocol version that the server supports; the server responds in the requested protocol but includes an Upgrade header indicating its preferred protocol. The client MAY request an upgrade, at which point the server will send a 101 Switching Protocols response and switch to the preferred protocol.
3. The client has requested an unsupported or outdated protocol; the server sends a 426 Upgrade Required response with supported protocols (in descending order of preference) in the Upgrade header, and the client must repeat the request with a supported protocol.
4. The client has requested a major protocol version that is wholly unsupported, e.g. HTTP/2.x while the server only supports HTTP/1.x; the server responds with 505 HTTP Version Not Supported.
The exchange with Google is not doing this; is this poor practice or am I missing something?
An example, selected at random:
https://plus.google.com/u/0/_/notifications/frame?querystring=blahblahblah
GET /u/0/_/notifications/frame?querystring=blahblahblah HTTP/1.1
Host: plus.google.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:36.0) Gecko/20100101 Firefox/36.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://www.google.co.uk/?gfe_rd=cr&ei=Lc8bVcXFOKbj8we_uIKYDg&gws_rd=ssl
Cookie: NID=67=iZxcMVTvg-6PsQIUpZ5tSPL-7-uJdls3vdci3afLmoLCpD5JOq0NfzhTnnpcCW9ymbXsn3GRGxfSgYlXGEk9XmnbUne0LCPrUc_ahhpc5wV6n-GZ8F7s-JS-JWgZWEwri-GaWXK1vgyRw7jMbqEiAUSRCzs1Fr1K6ZUIH0EpJdlwZD-K26MJNazpyHL_vZ5k4m8NrtFDkAoYPw; OTZ=2759671_52_56_123900_52_436380; SID=DQAAAP0AAAAqKgGz5aFNESd464Z_jUsmTi7JQfEKsuWkGZVJe8QvdbOPTZpL5ZNjKSsSSg9QvJglP-aMNLrgn2b7MsDF_4Z7Ebe1X347Cd3-j3ktLedgmq9nRO92hxEseqf974VNumrst-XqMj9Oq_xf-KDz-CDEJ1XiqWZYVHurV-IrXib5ei7x9dqlLF2NSPYLaCxlrwKdjCQX-FDDB03FWEuE7dIMYs3BQ-_NU5fG9os6I6r6ABy9mkiy84rraZFVthd38VJF5z2WYmgQ55QJPr9EDpSA5VKH1tbW6XyLjZLt5EEEj1xoqRF4EguRkIOiG8IiqRs49GnwqQSCpTw3ROW-jNDI; HSID=A7u8vyQI-v7jJSEbS; SSID=AOojY4hDLYgnSjUrK; APISID=z23KH1a0VsBukvMu/ARaOeOni08HfbGg6R; SAPISID=5iTgyxKDRPP7fNtF/AdiFbKNYN04h7n6cu; PREF=ID=cc54787f58f50d42:U=8e10581450dbe3b5:FF=0:LD=en:TM=1416091562:LM=1418086819:GM=1:S=0KVfl2hqkG8Psvwv; OGP=-5061451:-5061492:; OGPC=4061155-1:
Connection: keep-alive
HTTP/2.0 200 OK
Alternate-Protocol: 443:quic,p=0.5
Cache-Control: private, max-age=0
content-security-policy-report-only: script-src 'unsafe-inline' 'unsafe-eval' 'self' https://*.googleapis.com https://*.gstatic.com https://apis.google.com https://www.google-analytics.com https://www.googletagmanager.com https://*.talkgadget.google.com https://pagead2.googleadservices.com https://pagead2.googlesyndication.com https://tpc.googlesyndication.com https://s.ytimg.com https://www.youtube.com https://clients1.google.com https://www.google.com;report-uri /_/cspreport/es_oz_20150330.18_p0
Content-Type: text/html; charset=utf-8
Date: Wed, 01 Apr 2015 10:57:55 GMT
Expires: Wed, 01 Apr 2015 10:57:55 GMT
Server: GSE
x-content-type-options: nosniff
x-ua-compatible: IE=edge, chrome=1
X-XSS-Protection: 1; mode=block
X-Firefox-Spdy: h2-15
This is an HTTPS request. The client announced its support for HTTP/2.0 with the ALPN (formerly NPN) extension in the TLS handshake, so the server knows that the client can do HTTP/2.0. If this extension is not present, the server is not allowed to reply with a higher major HTTP version than the one in the client's request.
The HTTP version in the response is an advertisement of the capabilities of the server, not the actual protocol version of the response.
The protocol version of the response is the one that has been sent with the request.
In the past (and perhaps even nowadays) it was common for an old client to send an HTTP/1.0 request and have the server respond like this:
GET / HTTP/1.0
User-Agent: Netscape/1.0
HTTP/1.1 200 OK
Content-Length: 0
<connection closed>
The server advertised that it was able to speak HTTP/1.1, but behaved as HTTP/1.0 in the response (by closing the connection).
The same is happening in your case: you make an HTTP/1.1 request, the server advertises that it can speak HTTP/2.0, and it responds in the HTTP/1.1 response format.
A smart client receiving that response could start speaking HTTP/2.0 to that server.
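If you want to see the negotiation itself, it happens in the TLS handshake rather than in the HTTP messages. A small sketch (Node.js tls module; the target host is just the one from the example above) that offers both protocols via ALPN and prints which one the server selected:

const tls = require('tls');

// Sketch: offer h2 and http/1.1 via ALPN and report what the server picked.
const socket = tls.connect({
  host: 'plus.google.com',               // the host from the example above
  port: 443,
  servername: 'plus.google.com',         // SNI
  ALPNProtocols: ['h2', 'http/1.1'],     // protocols we are willing to speak
}, () => {
  // alpnProtocol is false if the server negotiated no protocol at all.
  console.log('negotiated protocol:', socket.alpnProtocol);
  socket.end();
});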

HTTP request, strange socket behaviour

I am experiencing strange behavior when doing HTTP requests through sockets. Here is the request:
POST https://example.com:443/service/XMLSelect HTTP/1.1
Content-Length: 10926
Host: example.com
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
Authorization: Basic XXX
SOAPAction: http://example.com/SubmitXml
After that comes the body of my request, with the given content length.
After that I receive something like:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: text/xml;charset=utf-8
Transfer-Encoding: chunked
Date: Tue, 30 Mar 2010 06:13:52 GMT
So everything seems to be fine here. I read all the content from the network stream and successfully receive the response. But the socket that I'm polling on switches its modes like this:
write ( i write headers and request here )
read ( after headers sent i begin to receive response )
write ( STRANGE BEHAVIOUR HERE. WHY? here i send nothing really )
read ( here it switches to read back again )
The last two steps can repeat several times. So I want to ask: what causes the socket's mode to change? In this case it's not a big problem, but when I use gzip compression in my request (no idea how that is related) and ask the server to send a gzipped response to me like this:
POST https://example.com:443/service/XMLSelect HTTP/1.1
Content-Length: 1076
Accept-Encoding: gzip
Content-Encoding: gzip
Host: example.com
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
Authorization: Basic XXX
SOAPAction: http://example.com/SubmitXml
I receive response like that:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Encoding: gzip
Content-Type: text/xml;charset=utf-8
Transfer-Encoding: chunked
Date: Tue, 30 Mar 2010 07:26:33 GMT
2000
�
I receive a chunk size and GZIP header, it's all okay. And here's what is happening with my poor little socket meanwhile:
write ( i write headers and request here )
read ( after headers sent i begin to receive response )
write ( STRANGE BEHAVIOUR HERE. And it finally sits here forever waiting for me to send something! But if i refer to HTTP I don't have to send anything more! )
What could this be related to? What does it want me to send? Is it a problem with the remote web server, or am I missing something?
PS All actual service references and login/passwords replaced with fake ones :)
A socket becomes writable whenever there is space in the socket send buffer. The OS can't really know whether your application has more data to send; it only knows about its internal structures, like socket buffers. You have to explicitly add/remove the socket to/from the write fd_set for select(2) (enable/disable the EPOLLOUT event for epoll(7)) depending on whether you actually have data queued to send. This is usually handled with a state machine, as in libevent. Also, polling works best with non-blocking sockets.
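As a loose illustration of the same principle in a higher-level API (not the select()/epoll interface itself, just the analogous backpressure handling in Node.js, with a made-up payload): write only while the send buffer has room, and otherwise wait for the socket to drain.

const net = require('net');

// Sketch: respect write backpressure instead of treating the socket as
// always writable. write() returns false when the send buffer is full;
// we then wait for the 'drain' event before continuing to write.
const socket = net.connect(80, 'example.com', () => {
  const chunk = Buffer.alloc(16 * 1024, 'x');   // dummy payload
  let remaining = 100;                          // send 100 chunks, then stop

  function writeSome() {
    while (remaining > 0) {
      remaining--;
      if (!socket.write(chunk)) {               // buffer full: stop writing
        return socket.once('drain', writeSome); // resume when there is room
      }
    }
    socket.end();                               // nothing left to send
  }
  writeSome();
});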
Hope this helps.

firefox, jQuery ajax calls firing twice and never triggering success or error functions

I am developing with the .NET framework, using jQuery 1.4.2 client side.
When developing in Firefox 3.6, every so often one of the many ajax calls I make on the page will fire twice: the second returns successfully but does not trigger the success handler of the ajax call, and the first never returns anything. So basically the data is all sent to the server and a response is sent down, but nothing happens with the response.
Here is an example of the call I am making. It happens to any of the ajax calls, so there is no particular one causing the problem:
$.ajax({
    type: "POST",
    contentType: "application/json; charset=utf-8",
    data: "{}",
    dataType: "json",
    success: function () {
        alert('success');
    },
    error: function () {
        alert('error');
    },
    url: '/services.aspx/somemethod'
});
From Firebug, here are the headers of the first call, which Firebug shows as never completely responding, meaning I see no response code and the loader gif in Firebug never goes away.
Note: in Firebug there is usually a Response Headers section, but for the first call this space is blank.
Server ASP.NET Development Server/9.0.0.0
X-AspNet-Version 2.0.50727
Content-Type application/json; charset=utf-8
Connection Close
Request Headers
Host mydomain.com
User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3)
Gecko/20100401Firefox/3.6.3 ( .NET CLR 3.5.30729)
Accept application/json, text/javascript, */*
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Connection keep-alive
Content-Type application/json; charset=utf-8
X-Requested-With XMLHttpRequest
Referer http://mydomain.com/mypage.aspx
Here are the headers from the second request, which does appear to complete in Firebug (i.e. the response is 200):
Response Header
Server ASP.NET Development Server/9.0.0.0
X-AspNet-Version 2.0.50727
Content-Type application/json; charset=utf-8
Connection Close
Request Headers
Host mydomain.com
User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3)
Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Connection keep-alive
Content-Type application/json; charset=utf-8
Referer http://mydomain.com/mypage.aspx
To summarize my question: why are two requests being made, and why is neither of them triggering the success or error handler of the ajax call?
I have seen this article about Firefox 3.5+ and preflighted requests:
https://developer.mozilla.org/En/HTTP_access_control#Preflighted_requests
The article says that if a POST is made with any content type other than application/x-www-form-urlencoded, multipart/form-data, or text/plain, then the request is preflighted. If that is the case, this should happen to all of my calls.
Thanks
This isn't an answer as much as a proposed temporary workaround. Make the call synchronous with async:false and see if things work again.
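Applied to the call from the question, the workaround would look something like this (only a sketch; a synchronous request blocks the browser while it runs, so treat it as a diagnostic step rather than a fix):

$.ajax({
    type: "POST",
    async: false,                     // workaround: force a synchronous request
    contentType: "application/json; charset=utf-8",
    data: "{}",
    dataType: "json",
    success: function () {
        alert('success');
    },
    error: function () {
        alert('error');
    },
    url: '/services.aspx/somemethod'
});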
I've been tearing my hair out over a similar-sounding bug recently.
