Nginx proxy caching - how to check if it is working?

I have set up my nginx.conf file to use proxy caching, following tutorials I found online. However, I'm trying to figure out how to check whether it is actually working. I've read somewhere that adding add_header X-Cache-Status $upstream_cache_status; to the server section of the config file should add a caching header to the response showing whether it came from the cache (with a value of HIT, MISS, or EXPIRED). However, I'm wondering WHERE I can actually view this header (and its value), and whether this is the right way or if there is another way. I'm very new to web in general, so sorry if this is a noob question. Thanks!

You have it right. To see the headers sent back, you need to check in your HTTP client. How exactly you do that (and whether you can) will depend on the client.
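For reference, here is a minimal sketch of where that directive goes; the cache zone name, path, and upstream are placeholders, not taken from your config:

# Inside the http { } block of nginx.conf; names/paths are illustrative
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;

server {
    listen 80;

    location / {
        proxy_cache my_cache;
        proxy_pass http://your_backend;
        # exposes HIT / MISS / EXPIRED (and a few other states) in the response
        add_header X-Cache-Status $upstream_cache_status;
    }
}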
Here are some easy ways to see the headers:
1. curl --head http://your-address
2. wget --server-response http://your-address
3. in Firefox, install the Live HTTP Headers add-on,
go to the URL, right-click -> View Page Info -> Headers
4. in Opera, open Dragonfly with Ctrl+I,
go to the Network -> Make Request part of the tool,
enter http://your-address,
and the result with headers will be shown in the response field
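For example, with curl you would expect something like this (output trimmed; the first request misses, a repeat request within the cache validity hits):

$ curl --head http://your-address/
HTTP/1.1 200 OK
...
X-Cache-Status: MISS

$ curl --head http://your-address/
HTTP/1.1 200 OK
...
X-Cache-Status: HIT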

Related

HTTP to HTTPS issues

I have a question; I am a bit confused and don't really understand why this is happening.
I have a website which works well over HTTP. When I force a redirect to HTTPS, something breaks. Even after I replace all the URLs in my code, only GET requests work. Does anybody have any idea why this is happening?
I also have an admin part of the website. I can log in to the admin, but I can't make any requests from it. When I try to POST or DELETE, I receive a 401 error, even though I am logged in and have set the token correctly...
So, bottom line:
Over HTTPS, the website works, it shows all the resources from the DB, and I can log in to the admin, but I cannot POST or DELETE.
Over HTTP everything works.
I am in huge need of advice or ideas.
Thanks.
From my experience you cannot serve mixed content, so my first suggestion is to call all your scripts/dependencies without the protocol prefix, i.e. change script src="https://blahblah" to script src="//blahblah", making sure you stick consistently to one serving source. That's the first thing I'd check (also look at the console logs; they often give hints as to what failed).
Secondly, I am unsure how the server handles traffic from non-HTTPS clients; possibly there's a rule in .htaccess or some other form of redirection trying to force the call via HTTPS, so plain HTTP fails? These are all debugging steps; you need to troubleshoot and play process of elimination. First, though, I'd make sure everything is served from // or https://. When on HTTP I would look at the console logs for clues, but even more so I would force a redirect to use HTTPS exclusively (as most sites do now); the sketch below shows one way.
Check for mixed-content issues first, though; this is something that can have a multitude of solutions depending on which of the many possible causes applies.
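If you do end up forcing HTTPS everywhere, a common .htaccess pattern for that (assuming Apache with mod_rewrite; adapt to your server) is:

# Redirect all plain-HTTP traffic to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

One caveat while debugging: browsers commonly replay a 301/302-redirected POST as a GET (dropping the body), so if your POST/DELETE calls still target http:// URLs and get redirected, that alone could explain why only GET requests survive the switch.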

nginx HTTP/2 push fails when Vary: Accept header is set

Basically, HTTP/2 push using http2_push_preload doesn't work if you set the header Vary: Accept on your response, because you are doing content negotiation using the Accept request header. I'm using content negotiation to send (HTTP/2 push) WebP pictures instead of JPEG to clients that support them.
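(For reference, a minimal sketch of this kind of Accept-based WebP negotiation in nginx; the map variable and file layout are illustrative, not necessarily the exact config in use:)

# In the http { } block: pick a ".webp" suffix when the client advertises support
map $http_accept $webp_suffix {
    default        "";
    "~*image/webp" ".webp";
}

server {
    location ~* \.jpe?g$ {
        add_header Vary Accept;
        # serve foo.jpg.webp if it exists and the client accepts WebP, else foo.jpg
        try_files $uri$webp_suffix $uri =404;
    }
}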
HTTP/2 push works for .js, .css files and so on in the same call, showing "Push / Other" in Chrome DevTools, but it fails for this one unique case (JPEG content-negotiated to WebP) and shows just "Other" (not pushed) in Chrome DevTools.
Content negotiation for Brotli and gzip compression works fine and gets pushed properly using Vary: Accept-Encoding, and the same goes for languages using Vary: Accept-Language.
Only Vary: Accept fails.
Please help; I'm at the point of giving up.
P.S.: I was going through the nginx source at https://github.com/nginx/nginx/blob/master/src/http/v2/ngx_http_v2.c. Do a Ctrl+F and you will find cases only for "Accept-Encoding" and "Accept-Language", nothing for "Accept". So I think the "Accept" case is not yet supported by nginx?
P.P.S.: I'm not over-pushing; I'm only using HTTP/2 push for the hero image.
Edit: Here are the bug tickets on the nginx site for those who want to track this:
https://trac.nginx.org/nginx/ticket/1851
https://trac.nginx.org/nginx/ticket/1817
Edit 2: The nginx team has responded by saying they are not going to support it due to security reasons (you can find the response in the duplicate bug ticket), which I believe is about pushing from different origins like CDNs? Anyway, I need this feature, so the only options left are to:
1. Create a custom patch or package.
2. Use some other server software that supports it.
3. Manually implement, in the website code, a feature that rewrites .jpg paths to .jpg.webp when requests come from clients that support WebP (see the sketch below).
(I don't give up :P)
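For what it's worth, a rough sketch of option 3 in PHP (the helper name is hypothetical; it just checks the Accept header and swaps the path):

<?php
// Hypothetical helper: return the .webp variant for clients that advertise support.
function webp_or_jpg($jpg_path) {
    $accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
    if (strpos($accept, 'image/webp') !== false) {
        return $jpg_path . '.webp';
    }
    return $jpg_path;
}

// e.g. echo webp_or_jpg('/images/hero.jpg');
// -> "/images/hero.jpg.webp" for WebP-capable clients
?>

Responses built this way should keep sending Vary: Accept so caches don't mix up the two variants.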
I'm not entirely surprised by this, and Apache does the same. If you want this to change, I suggest raising a bug with nginx, but I wouldn't be surprised if they didn't prioritise it.
It also seems that browsers don't handle this situation very well either.
HTTP/2 push is fraught with opportunities to over-push, and this is one example. You should not push if the client does not support WebP, and you often won't know that with the information you have at this point. Chrome, for example, seems to send image/webp in the Accept header when you ask for the HTML, but Firefox does not.
Preload is a much better, safer option that will respect Vary headers and also cache status.
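For example, the push could be replaced with a preload hint (filename illustrative); the browser then requests the image itself, sending its real Accept header, so the content negotiation still works:

# nginx: emit a preload hint instead of pushing the resource
add_header Link "</images/hero.jpg>; rel=preload; as=image";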

Is there another way to set cookies than through HTTP headers?

I'm writing some HTTP client code to interact with a website, and I need to set some cookies. Simply visiting the website sets 4 cookies (as seen in Chrome's settings).
However, when I look at the HTTP response headers from when those cookies were set (using the Live HTTP Headers extension), there is no Set-Cookie header anywhere. How were those cookies set? Is there another way than through Set-Cookie?
Edit: Some of the cookies are HttpOnly.
If you load a site in your browser, it might also load other assets, which can set cookies too (given that they are on the same domain).
But there is a second way to set cookies: with JavaScript, via document.cookie.
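For example, any script running on the page can do:

// JavaScript on the page sets a cookie without any Set-Cookie header appearing
document.cookie = "theme=dark; path=/; max-age=3600";

Note that cookies set this way can never be HttpOnly; since some of your cookies are HttpOnly, those particular ones must have come from a Set-Cookie header on some response, possibly for a different asset than the main page.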
As far as I know, if your JavaScript or Python code sets a cookie for that domain, then the response will include the Set-Cookie field. You can view that from at least the inspect console.
I see that you're using the Live HTTP Headers extension, but it doesn't look like it shows that field in the response.
I tried looking for other extensions that could show it, but I wasn't able to find one. I suppose we can both always fall back to the Chrome inspect console: if you go to the Network tab, you should see the full request/response.

Event Espresso CORS is wrong

I was hired to write a WordPress plugin which involves an AJAX request to the website's Event Espresso API.
I got it working fine locally (calling the live site's API from my local server), but when I activate the plugin on the live site, it throws:
Failed to load http://example.com/wp-json/ee/v4.8.36/events: The
'Access-Control-Allow-Origin' header has a value 'http://opt.local'
that is not equal to the supplied origin. Origin
'http://www.example.com' is therefore not allowed access.
My local domain is http://opt.local, and the live site is http://example.com.
This error suggests to me that it only wants to allow access from my local setup, and not from the live site, which isn't even cross-origin! Maybe I caused it to cache the wrong thing during development?
A few more tests revealed that the CORS settings are correct for everything except the specific route I need:
> curl -I "http://example.com/wp-json"
Access-Control-Allow-Origin: http://example.com
> curl -I "http://example.com/wp-json/ee/v4.8.36"
Access-Control-Allow-Origin: http://example.com
> curl -I "http://example.com/wp-json/ee/v4.8.36/events"
Access-Control-Allow-Origin: http://opt.local
I was able to make it work by using ee/v4.8.35 (a lower API patch version), but hopefully there is a better solution.
I helped develop the EE4 REST API.
Yeah, it sounds like some issue where the web server, a proxy, or something else is caching the Access-Control-Allow-Origin header.
There's no code in the EE4 REST API that controls that header; that's actually handled by the WP API (on which the EE4 REST API is built).
The relevant code is in wp-includes/rest-api.php in the function rest_send_cors_headers(). That calls get_http_origin(), whose value can be filtered using the filter http_origin.
So you might want to try adding something like
function my_plugin_force_correct_http_origin($http_origin) {
    return 'http://example.com';
}
add_filter('http_origin', 'my_plugin_force_correct_http_origin');
That will ensure the PHP code is sending the correct Access-Control-Allow-Origin header.
If that doesn't resolve the issue, I would verify rest_send_cors_headers() is getting called at all (you could temporarily put a line like echo 'called rest_send_cors_headers!';die; inside that function to check).
If it is getting called, and my suggested filter doesn't help, you could try tagging your question with 'wordpress-rest-api'. Also, I would be curious to see if http://example.com/wp-json/ee/v4.8.36/events?limit=50 has the same problem.

Tamper with the first line of an HTTP request, in Firefox

I want to change the first line of my HTTP request, modifying the method and/or the URL.
The (excellent) Tamperdata Firefox plugin allows a developer to modify the headers of a request, but not the URL itself. The latter is what I want to be able to do.
So something like...
GET http://foo.com/?foo=foo HTTP/1.1
... could become ...
GET http://bar.com/?bar=bar HTTP/1.1
For context, I need to tamper with (i.e. correct) an erroneous request from Flash, to see if the error can be fixed by correcting the URL.
Any ideas? This sounds like something that may need to be done at the proxy level; in which case, any suggestions?
Check out Charles Proxy (multiplatform) and/or Fiddler2 (Windows only) for more client-side solutions - both of these run as a proxy and can modify requests before they get sent out to the server.
If you have access to the webserver and it's running Apache, you can set up some rewrite rules that will modify the URL before it gets processed by the main HTTP engine.
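For example, using mod_rewrite with the foo/bar URLs from the question (purely illustrative):

# Apache mod_rewrite: internally rewrite the erroneous query string to the corrected one
RewriteEngine On
RewriteCond %{QUERY_STRING} ^foo=foo$
RewriteRule ^/?$ /?bar=bar [L]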
For those coming to this page from a search engine, I would also recommend the Burp Proxy suite: http://www.portswigger.net/burp/proxy.html
Although more specifically targeted towards security testing, it's still an invaluable tool.
If you're trying to intercept the HTTP packets and modify them on the way out, then Tamperdata may be the route you want to take.
However, if you want minute control over these things, you'd be much better off simulating the entire browser session using a utility such as curl:
Curl: http://curl.haxx.se/
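For instance, you could replay the corrected request by hand (header values here are placeholders; copy the real ones from the captured Flash request):

# Send the corrected request line manually and inspect the exchange verbosely
curl -v "http://bar.com/?bar=bar" \
     -H "User-Agent: Mozilla/5.0" \
     -H "Cookie: session=PLACEHOLDER"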
