Symfony 6 HttpCache in dev mode

I'm running some tests to use Symfony 6's reverse proxy to cache my pages. To do so, I enabled the http_cache option in the dev environment.
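For reference, the option is enabled roughly like this (a sketch, assuming the default Symfony config layout):
# config/packages/framework.yaml (sketch; adjust to your setup)
when@dev:
    framework:
        http_cache: true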
It seems to work if I cache a page with a basic strategy like this:
#[Cache(public: true, maxage: 3600, mustRevalidate: true)]
The cache response headers are correct, and the special Symfony header shows that the cached page is fresh:
X-Symfony-Cache: GET /: fresh
But something worries me: in the profiler, my database queries are shown as if they had been executed, which shouldn't be the case if the response is served from the cache. Is it because caching is only simulated in the dev environment so that profiling still works? Or did I misconfigure something?
Thanks a lot!

Related

RequestHeaderSectionTooLarge: Your request header section exceeds the maximum allowed size

We are using AWS Amplify for our Next.js web app and keep receiving this error whenever I try to load the application once it's deployed to Amplify. Locally there is no issue.
I am using Amplify's default Auth configuration, with basic email and password auth. It looks like it could be related to the Amplify cookie being set in the header, but I cannot find any documentation within AWS on preventing this or reducing the amount of information passed with the header. Any help would be appreciated.
I have faced the same issue and was able to solve it. Here's how:
1. Identify the CloudFront Distribution ID for your app. You can find it in the Deploy logs of your app build console (or via the CLI, as sketched after this list).
2. Search for and open that CloudFront distribution, and go to the Behaviors tab.
3. Select the Default behavior (5th one in my case) and hit Edit.
4. Scroll down to the "Cache key and origin requests" section.
5. Here you will find settings that control what's included in the headers of the request that goes to the server. In my case I didn't need any cookies, so I chose None, and it solved the issue for me.
In your case, you can do the same, or pick exactly which information needs to be in the headers.
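For reference, a quick way to locate the distribution ID from the CLI (a sketch; it assumes the AWS CLI is installed and configured):
aws cloudfront list-distributions --query "DistributionList.Items[].{Id: Id, Domain: DomainName}"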
Check to see if there are any unnecessary cookies for that domain.
I was getting this error (on a site I don't own). I took a look at the request headers and found a very large number of cookies (several dozen) for the site's domain. I cleared the cookies that seemed non-critical and the error went away.
As the error implies, the size of the entire request header section is above 8192 bytes. Request headers include the accept headers, the user agent, the cookies, etc., and combined they can get rather large. Large headers also look malicious to some WAFs. I once had a single user having trouble with our site. It turned out they were a polyglot and had configured their browser to accept several dozen languages, which made their Accept-Language header suspiciously long, and the WAF refused to proxy the request.
I faced the same issue using Next.js, Amplify, and an external auth provider.
The problem is that the AWS S3 service has a maximum allowed request header size of 8192 bytes, so whenever you try to access the statically generated Next.js pages it returns that error. This has already been asked here.
In my case, I was using an external auth provider and was able to solve the issue by configuring the cookies only for the '/api/' path. That way the auth cookies are sent only to the Next.js API endpoints, so your request header is lighter whenever you request the static pages, as in the sketch below.
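A minimal sketch of the idea in a Next.js API route (the route and cookie name here are placeholders, not my actual code):
// pages/api/login.ts (hypothetical route; adjust to your auth provider)
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Path=/api scopes the cookie to the API endpoints only, so it is not
  // sent along with requests for the statically generated pages.
  res.setHeader(
    'Set-Cookie',
    'auth_token=PLACEHOLDER; Path=/api; HttpOnly; Secure; SameSite=Lax'
  );
  res.status(200).json({ ok: true });
}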

Error 426 from newsapi.org once I deployed my site on Netlify

While I was trying my project on localhost it worked fine, using https://cors-anywhere.herokuapp.com/ since I ran into the CORS problem. But once I deployed the site on Netlify, it gave me error 426 (Upgrade Required), with or without https://cors-anywhere.herokuapp.com/.
These are the messages that appear on my console:
Failed to load resource: the server responded with a status of 426 (Upgrade Required)
Error: Request failed with status code 426
    at createError.js:16
    at settle.js:17
    at XMLHttpRequest.<anonymous> (xhr.js:61)
I have been searching, and some people seem to have a similar problem. I have seen solutions like running my own server to pass the requests through, but I don't know how to do that and, correct me if I am wrong, wouldn't that be the same as using https://cors-anywhere.herokuapp.com/?
Newsapi changed their pricing model.
You can't make requests from the browser anymore; you'll have to use a backend. I had the same problem, and the easiest way around it was implementing a Node (Express) server.
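Something along these lines (a sketch, not my exact code; the endpoint path and environment variable name are placeholders, and Node 18+ is assumed for the global fetch):
// server.ts (sketch)
import express from 'express';

const app = express();

// The browser calls this endpoint; the newsapi.org key stays on the server.
app.get('/api/headlines', async (req, res) => {
  const r = await fetch('https://newsapi.org/v2/top-headlines?country=us', {
    headers: { 'X-Api-Key': process.env.NEWS_API_KEY ?? '' },
  });
  res.status(r.status).json(await r.json());
});

app.listen(3000, () => console.log('proxy listening on port 3000'));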
I guess the free plan simply is no longer available in production.
"Requests from the browser are not allowed on the Developer plan, except from localhost."
Here's the updated pricing page:
https://newsapi.org/pricing
Actually, the newsapi.org API on the Developer plan no longer works in production, because on that plan CORS is only enabled for localhost:
Developer Plan $0
CORS enabled for localhost.
https://newsapi.org/pricing
But if you want to fetch news in production, there is an alternative with a free tier of 1,000 requests per month that also works in production:
https://newsapi.in/
Newsapi.in provides an API with CORS enabled for all origins.
Enjoy...
As others have mentioned, Newsapi no longer allows you to make requests from the browser.
newscatcher has a no-card free tier that allows for 10,000 requests. On top of that, depending on your use case, you can even email them to increase the limits for a short span of time, or to add extra data points.

Event Espresso CORS is wrong

I was hired to write a WordPress plugin that makes an AJAX request to the website's Event Espresso API.
I got it working fine locally (calling the live site's API from my local server), but when I activate the plugin on the live site, it throws:
Failed to load http://example.com/wp-json/ee/v4.8.36/events: The
'Access-Control-Allow-Origin' header has a value 'http://opt.local'
that is not equal to the supplied origin. Origin
'http://www.example.com' is therefore not allowed access.
My local domain is "http://opt.local", and the live site is http://example.com.
This error suggests to me that it only wants to allow access from my local setup, and not from the live site, which isn't even cross-origin! Maybe I caused it to cache the wrong thing in development?
A few more tests revealed that the CORS settings are correct for everything except the specific route I need:
> curl -I "http://example.com/wp-json"
Access-Control-Allow-Origin: http://example.com
> curl -I "http://example.com/wp-json/ee/v4.8.36"
Access-Control-Allow-Origin: http://example.com
> curl -I "http://example.com/wp-json/ee/v4.8.36/events"
Access-Control-Allow-Origin: http://opt.local
I was able to make it work by using ee/v4.8.35 (a lower API patch version), but hopefully there is a better solution.
I helped develop the EE4 REST API.
Ya it sounds like some issue where the webserver or a proxy or something is caching the Access-Control-Allow-Origin header.
There's no code in the EE4 REST API that controls that header, that's actually handled by the WP API (on which the EE4 REST API is built).
The relevant code is in wp-includes/rest-api.php in the function rest_send_cors_headers(). That calls get_http_origin(), whose value can be filtered using the filter http_origin.
So you might want to try adding something like
function my_plugin_force_correct_http_origin($http_origin) {
    return 'http://example.com';
}
add_filter('http_origin', 'my_plugin_force_correct_http_origin');
That will ensure the PHP code sends the correct Access-Control-Allow-Origin header.
If that doesn't resolve the issue, I would verify rest_send_cors_headers() is getting called at all (you could temporarily put a line like echo 'called rest_send_cors_headers!';die; inside that function to check).
If it is getting called, and my suggested filter doesn't help, you could try tagging your question with 'wordpress-rest-api'. Also, I would be curious to see if http://example.com/wp-json/ee/v4.8.36/events?limit=50 has the same problem.

Postman is not using the cookie

I've been using Postman in my app development for some time and never had any issues. I typically use it with Google Chrome while I debug my ASP.NET API code.
About a month or so ago, I started having problems where Postman doesn't seem to send the cookie my site issued.
Through Fiddler, I inspect the call I'm making to my API and see that Postman is NOT sending the cookie issued by my API app. It's sending other cookies, but not the one it is supposed to send.
Under "Cookies" in Postman, I do see the cookie I issue, i.e. .AspNetCore.mysite_cookie.
Any idea why this might be happening?
P.S. I think this issue started after I made some changes to my code to name my cookies. My API app uses social authentication, and I decided to name both cookies, i.e. the one I receive from Facebook/Google/LinkedIn once the user is authenticated and the one I issue to authenticated users. I call the cookie I get from the social sites social_auth_cookie, and the one I issue is named mysite_cookie. I think this has something to do with the issue I'm having.
The cookie in question cannot legally be sent over an HTTP connection because its secure attribute is set.
For some reason, mysite_cookie has its secure attribute set differently from social_auth_cookie, either because you are setting it in code...
var cookie = new HttpCookie("mysite_cookie", cookieValue);
cookie.Secure = true;
...or because the service is configured to automatically set it, e.g. with something like this in web.config:
<httpCookies httpOnlyCookies="true" requireSSL="true"/>
The flag could also potentially be set by a network device (e.g. an SSL offloading appliance) in a production environment. But that's not very likely in your dev environment.
I suggest you try the same code base but over an HTTPS connection. If you are working on code that affects authentication mechanisms, you really ought to set up your development environment with SSL anyway, or else you are going to miss a lot of bugs, and you won't be able to perform any meaningful pen testing or app scanning for potential threats.
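If you're on ASP.NET Core, for example, trusting a local development certificate is a one-liner (a sketch; assumes the dotnet CLI and .NET Core 2.1 or later):
dotnet dev-certs https --trust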
You don't need to worry about cookies if you already have them in your browser.
You can use your browser's cookies by installing the Postman Interceptor extension (to the left of the "In Sync" button).
I have been running into this issue recently with ASP.NET Core 2.0. ASP.NET Core 1.1, however, seems to work just fine, and the cookies get set in Postman.
From what you have described, it seems like Postman is not picking up the cookie you want because it doesn't recognize the name of the cookie, or it is still pointing at the old cookie.
Things you can try:
1. Undo all the name changes and see if it works (just to get to the root of the issue).
2. Rename one cookie and see if it still works, then proceed with the other.
I hope debugging this way takes you to the root cause of the issue.

IIS Config - PROPFIND, OPTIONS verbs are ignored and treated as GETs?

I'm attempting to configure a WebDAV server example application (https://sourceforge.net/projects/webdav/) to run on IIS 6 (Windows Server 2003). The application runs correctly on my dev machine (Windows 7, IIS 7.5).
When I attempt to map a drive to the DAV share, several requests are issued, including one OPTIONS request and two PROPFIND requests.
In Fiddler, I see that these are transmitted correctly. However, the response is always the content of the default page on the site. If I look at the IIS logs, the requests are logged as GETs instead of OPTIONS or PROPFIND.
UrlScan is disabled, but I went ahead and added OPTIONS and PROPFIND to the list of allowed verbs (since I'm running out of ideas).
Help.
Solved.
Turns out that URLScan wasn't disabled, though it was not listed in the ISAPI filter list in IIS Manager. Just for kicks I renamed the URLScan.ini file, which resulted in an exception when any site on the server was hit.
Rather than removing URLScan completely (following the Prime Directive), I modified the DenyVerbs and DenyHeaders sections to allow all of the DAV stuff.
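For reference, the relevant parts of urlscan.ini look roughly like this (a sketch; the exact default entries depend on your URLScan version):
; urlscan.ini (sketch)
[Options]
UseAllowVerbs=0            ; 0 = filter verbs using the [DenyVerbs] list below

[DenyVerbs]
; make sure OPTIONS and PROPFIND are NOT listed here

[DenyHeaders]
; URLScan denies WebDAV headers such as Translate:, If:, and Lock-Token:
; by default; remove them here to let DAV requests through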
I'll accept an answer from the first person to provide a clear explanation of what security problems this may introduce if put in production.
