If a resource (index.html) is already cached in the client, for example using the response header:
"Cache-Control": "max-age=0, must-revalidate, proxy-revalidate"
How can I prevent Tomcat from responding with a 304 Not Modified in the next request to the server? I would like to force the server to respond with 200 instead of 304, no matter what.
I tried to set
httpResp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate, proxy-revalidate");
httpResp.setHeader("Pragma", "no-cache");
httpResp.setHeader("Expires", "0");
but it only works in the SECOND request. The first request still gets a 304.
I tried to override the If-Modified-Since header using HttpServletRequestWrapper with values in the past, such as Mon, 06 Dec 2010 01:34:46 GMT, with no luck; the client still gets 304 responses although the file was modified in 2015.
Is there any way I can prevent 304 responses, maybe via Tomcat configuration?
Not sure if this will help, but you could try the following:
Delete the browser cache to start from scratch and test whether this works or not.
Adapted from another Server Fault question:
https://serverfault.com/questions/40205/how-do-i-disable-tomcat-caching-im-having-weird-static-file-problems?answertab=votes#tab-top
You might have to delete the application cache folder in /work/Catalina/localhost after changing the cachingAllowed flag.
The configuration can be introduced in server.xml as:
<Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="false"
charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
reloadable="false"
wrapperClass="org.apache.catalina.core.StandardWrapper">
</Context>
I'm trying to mirror the whole website at http://opposedforces.com/parts/impreza/en_g11/type_63/
Accessing through a browser (Firefox, w3m) or Postman works fine and returns the HTML file.
Accessing through wget, cURL, the Python requests module and HTTrack all fail.
wget specifically fails with:
↪ wget --mirror -p --convert-links "http://opposedforces.com/parts/impreza/en_g11/type_63/"
--2021-02-03 20:48:29-- http://opposedforces.com/parts/impreza/en_g11/type_63/
Resolving opposedforces.com (opposedforces.com)... 138.201.30.59
Connecting to opposedforces.com (opposedforces.com)|138.201.30.59|:80... connected.
HTTP request sent, awaiting response... 0
2021-02-03 20:48:29 ERROR 0: (no description).
Converted links in 0 files in 0 seconds.
It seemingly returns no information. Originally I thought some JavaScript was generating the HTML, but I can't find any JS using the Firefox developer tools, and I would assume Postman would not work in that case either.
Any ideas how to get around this? Ideally I can use wget to download this and all sub-pages, but alternative solutions are also welcome.
This is one of those times when the website is completely and absolutely broken.
It is unfortunate that web browsers go to great lengths to support such broken web pages.
The problem is that the server sends a broken response. This is the response I see:
---response begin---
HTTP/1.1 000
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 44892
Expires: -1
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
Set-Cookie: ASP.NET_SessionId=gxhoir45jpd43545iujdpiru; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Fri, 05 Feb 2021 09:26:26 GMT
See? It returns an HTTP/1.1 000 response, which doesn't exist in the spec. Web browsers seem to just accept it as a 200 response and move on. Wget doesn't.
But you can get around it by using the --content-on-error option, which asks Wget to download the content irrespective of the response code.
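For example, adding that flag to the mirror command from the question (a sketch; the other options are unchanged):
wget --mirror -p --convert-links --content-on-error "http://opposedforces.com/parts/impreza/en_g11/type_63/"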
I have a little Web API, almost directly from the standard VS project template, i.e. Home and Values controllers, and lots of MVC cruft. It is set to run and debug under IIS 10.
I have set up tracing by adding the package Microsoft.AspNet.WebApi.Tracing and the following code in WebApiConfig:
public static void Register(HttpConfiguration config)
{
SystemDiagnosticsTraceWriter traceWriter = config.EnableSystemDiagnosticsTracing();
traceWriter.IsVerbose = true;
traceWriter.MinimumLevel = TraceLevel.Debug;
config.Services.Replace(typeof(ITraceWriter), new SimpleTracer());
...
...
}
SimpleTracer is an ITraceWriter that writes to a text file.
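For reference, a minimal sketch of what such a tracer could look like (the file path, log format, and locking are assumptions, not taken from the original code):
using System;
using System.IO;
using System.Net.Http;
using System.Web.Http.Tracing;

public class SimpleTracer : ITraceWriter
{
    // Hypothetical path; it must be writable by the IIS app pool identity.
    private const string TraceFile = @"C:\temp\webapi-trace.log";
    private static readonly object _sync = new object();

    public void Trace(HttpRequestMessage request, string category, TraceLevel level,
                      Action<TraceRecord> traceAction)
    {
        // Let the framework populate the record, then append it to the file.
        var record = new TraceRecord(request, category, level);
        traceAction(record);

        lock (_sync)
        {
            File.AppendAllText(TraceFile,
                $"{DateTime.UtcNow:o} {level} {category} {record.Operator}.{record.Operation} {record.Message}{Environment.NewLine}");
        }
    }
}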
When I call the API from outside the VS ecosystem, i.e. from Postman in Chrome, with a bad URL, I get a 404 error message and a new trace file is created if there isn't one already. If I call it from Postman with a good URL, I get the expected result and a trace of the request in the trace file.
When I call it from my console app, even with a good URL, I still get a 404 error response, and nothing is written to the trace file. I made sure by deleting the trace file; IIS doesn't even re-create it when the request comes from the .exe client.
If I call it from the compiled .exe from outside VS, I get the same error.
Then, when I set the Web API to use IIS Express, everything works perfectly. Do I need CORS for calls from non-web apps? Does IIS need an extra header in this case? What is wrong?
EDIT A: This is the request when I use PostMan, and it returns a 200 and the expected list of strings.
GET /DemoApi/api/values HTTP/1.1
Host: localhost
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: f9454ffc-6a8d-e1ed-1a28-23ed8166e534
and the response and headers:
["value1","value2","value3","value4","value5","value6","value7"]
Cache-Control: no-cache
Content-Length: 64
Content-Type: application/json; charset=utf-8
Date: Tue, 13 Dec 2016 06:20:07 GMT
Expires: -1
Pragma: no-cache
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
EDIT B: This is the request sent using HttpClient:
GET http://abbeyofthelema/api/values HTTP/1.1
Accept: application/json
Host: abbeyofthelema
Connection: Keep-Alive
The only real difference is that because Fiddler doesn't capture traffic from localhost, I had to use my computer name instead. The same recipient still gets the request.
The response here is:
HTTP/1.1 404 Not Found
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 13 Dec 2016 08:07:25 GMT
Content-Length: 4959
According to Timothy Shields at the following link:
Why is HttpClient BaseAddress not working?
You must place a slash at the end of the BaseAddress, and you must not place a slash at the beginning of your relative URI
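To illustrate that rule, here is a small console-client sketch (the host name and /DemoApi application path are taken from the question's own captures; everything else is assumed):
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // The trailing slash on BaseAddress keeps the /DemoApi/ segment when
        // the relative URI is combined with it.
        var client = new HttpClient
        {
            BaseAddress = new Uri("http://abbeyofthelema/DemoApi/")
        };

        // No leading slash here: "/api/values" would resolve against the host
        // root, drop /DemoApi/, and produce exactly the 404 seen in Fiddler.
        HttpResponseMessage response = await client.GetAsync("api/values");

        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}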
Should non-2XX status code responses still include CORS specific headers such as Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Max-Age? Does that even make any sense for clients?
For example:
➜ api git:(master) ✗ curl -i http://127.0.0.1:9000/dfas
HTTP/1.1 404 Not Found
Connection: close
Server: Node.js v6.3.1
Cache-Control: no-cache, no-store
Access-Control-Max-Age: 300
Access-Control-Allow-Origin: *
Content-Type: application/json
Content-Length: 60
Date: Thu, 11 Aug 2016 22:58:33 GMT
{"code":"ResourceNotFound","message":"/dfas does not exist"}
Yes, it makes sense to have the server send CORS headers even with non-2xx responses. The reason is: without the CORS headers in the response, non-2xx response codes aren't exposed to frontend code (through Fetch or XHR). The response codes may show up in the devtools console, but without the CORS headers the only thing the frontend code will be able to determine programmatically is that an error occurred, not the response code for the error.
So if you want frontend code to have the ability to do useful error handling based on the response code, the server should send CORS headers even in non-2xx responses.
Context
My GitHub Pages site is not refreshing. After diagnosing, my conclusion is that it's a server-side caching effect.
What I did + diagnostic results
The site is working OK.
I made a change in index.html in my local repo, then committed and pushed.
I completely cleared my browser cache (btw also using cache-clearing plugins, and Chrome dev tools set to not use the cache).
Reloaded the page with Ctrl+F5 and Ctrl+R (the change is not applied).
Checked index.html on github.com: the change is there, committed.
Monitored the traffic with Fiddler. The request for index.html is sent and a full response is received, but the content is the old, NOT changed one.
Examined the response header with Fiddler, which says: (see header exhibit)
Reverse diagnostic
I've issued a request with the usual trick, typing: index.html?v001orAnythingYouWant, and I got the new version of the page.
Problem
Problem solved, one could say, but that is not true: when I refresh images, CSS, or JS, this effect will still prevent me from seeing the new result.
Question
How can I configure around or overcome this server-side caching, of course only for development/testing time?
Response header exhibit
HTTP/1.1 200 OK
Server: GitHub.com
Content-Type: text/html; charset=utf-8
Last-Modified: Fri, 06 May 2016 12:24:29 GMT
Access-Control-Allow-Origin: *
Expires: Fri, 06 May 2016 12:45:44 GMT
Cache-Control: max-age=600
X-GitHub-Request-Id: B91F111E:5AA6:47804:572C8F9F
Content-Length: 43752
Accept-Ranges: bytes
Date: Fri, 06 May 2016 12:35:57 GMT
Via: 1.1 varnish
Age: 13
Connection: keep-alive
X-Served-By: cache-fra1238-FRA
X-Cache: HIT
X-Cache-Hits: 1
Vary: Accept-Encoding
X-Fastly-Request-ID: 1758f53052edbfb40a0044407d53d5654ad1e983
I am using the LinkedIn Owin Middleware and started running into issues this morning; I have now reduced it to the error below:
POST https://www.linkedin.com/uas/oauth2/accessToken HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: www.linkedin.com
Cookie: bscookie="v=1&201504071234373bc02b47-9d08-477f-8375-b80b281ef416AQEptFjv8jXPI93YmF-H-3kvnwSLwBF8"; bcookie="v=2&46f6f299-6702-48bf-8634-7ba023bd5099"; lidc="b=LB23:g=218:u=215:i=1428412320:t=1428487523:s=AQEQQq6vlEKPT3LW8c0cPEzRTKp-ToxL"
Content-Length: 267
Expect: 100-continue
Connection: Keep-Alive
grant_type=authorization_code&code=AQQRSgEH8vczSFJKNxtMpunzjYN6YJxoF2hiX_d9RVkqBvMC7TzRpur0p9NJFdQOUNf8RmFyj_cCg3ENTucRw5e-gQfEZ5sPGoujiFRsQ8Tb0pLnaog&redirect_uri=http%3A%2F%2Flocalhost%3A1729%2Fsignin-linkedin&client_id=&client_secret=
This results in Method Not Allowed:
HTTP/1.1 405 Method Not Allowed
Date: Tue, 07 Apr 2015 13:13:16 GMT
Content-Type: text/html
Content-Language: en
Content-Length: 5487
X-Li-Fabric: PROD-ELA4
Strict-Transport-Security: max-age=0
Set-Cookie: lidc="b=LB23:g=218:u=215:i=1428412396:t=1428487523:s=AQExeP2uX-7KXQv79NIZmW0LB09uE4eJ"; Expires=Wed, 08 Apr 2015 10:05:23 GMT; domain=.linkedin.com; Path=/
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store
Connection: keep-alive
X-Li-Pop: PROD-IDB2
X-LI-UUID: 0FM/jIG90hPAzyhAqCsAAA==
I'm looking for anyone to confirm that there was a change on LinkedIn causing this error and that it's not application-specific.
Note that I removed the client ID/secret values from the request above.
I also spent most of the morning, off and on, trying to get this to work. Frustratingly, it worked fine using the Advanced REST Client Chrome tool. A combination of this and Fiddler showed that the only difference was the Expect: 100-continue flag in the header. The only way I was able to get it set to false was in the web.config <system.net> section:
<system.net>
<settings>
<servicePointManager expect100Continue="false" />
</settings>
</system.net>
Hope this helps.
I ran into this issue this morning too (I'm using DotNetOpenAuth). It looks like this is related to the use of the following request header: Expect: 100-continue
After removing this request header, the HTTP/1.1 405 Method Not Allowed response no longer occurs. Obviously this isn't much help if you don't have access to the source code!
I'm assuming this is due to a change in LinkedIn as I only started experiencing problems this morning. I'm guessing they'll need to look into a fix for this.
I started having this issue today. After some research about Expect: 100-continue I found that putting
System.Net.ServicePointManager.Expect100Continue = false;
in my Application_Start() function inside Global.asax takes the 100-continue out of the request, and my login with LinkedIn is now working again.
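For context, a minimal Global.asax.cs sketch showing where that line goes (the class name and surrounding startup code are assumptions; only the ServicePointManager line is the actual fix):
using System.Net;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Stops outgoing requests from sending "Expect: 100-continue",
        // which the LinkedIn accessToken endpoint now answers with a 405.
        ServicePointManager.Expect100Continue = false;

        // ... the rest of your existing startup code (routes, bundles, auth config) ...
    }
}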
Not a permanent fix, as I would like to know why it broke in the first place.
I had the same issue and also use DotNetOpenAuth.
How I fixed it:
I removed the "Expect: 100-continue" request header.
In my case redirect_uri was encoded, and I removed the encoding of redirect_uri (for the request to https://www.linkedin.com/uas/oauth2/accessToken).
For those using Owin Middleware and Owin.Security.Providers
A pre-release NuGet package was created with a fix.
https://www.nuget.org/packages/Owin.Security.Providers/1.17.0-pre
This works for now. But until we know what LinkedIn has changed, or they come out with a statement about it, people can use this as a hotfix.
A little more background on the fix can be found at:
https://github.com/RockstarLabs/OwinOAuthProviders/issues/87#issuecomment-90838017
The root cause is that LinkedIn changed something on their accessToken endpoint, so most of the libraries using LinkedIn SSO had to apply a hotfix, but we haven't heard anything from LinkedIn yet.
Found a solution for curl, pretty simple:
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:') );