What value should I send for `SameSite=None` and `Secure`? - iframe

The Chrome update has given me a lot of trouble.
So far, I haven't found any answer that correctly addresses the SameSite issues in Chrome 80.
Here is my situation.
My HTML page contains only an iframe provided by YouTube.
However, Chrome warns that cross-site cookies will only be sent 'if they are set with SameSite=None and Secure'.
What should I do?
I only have a YouTube iframe; what values should I send for the cookie and session?
I'm looking for a direct solution.
A workaround like 'cross-site-cookie=name' doesn't help me.
I need an approach I can actually apply.
I'm on PHP < 7.3, and I want to solve the problem with PHP, JavaScript, .htaccess, or similar.
Only a YouTube iframe tag.

Chrome's new cookie policy (as of Chrome 80) treats cookies without a SameSite attribute as SameSite=Lax, so cookies that must be sent in a cross-site context (such as inside an iframe) have to be marked SameSite=None; Secure, which in turn requires the application to be served over HTTPS.
This rule is enforced to disable unsafe requests across sites and reduce the risk of Cross-Site Request Forgery (CSRF).
You can read more about possible use cases in this post:
https://web.dev/samesite-cookie-recipes/
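For the PHP < 7.3 constraint in the question: setcookie() and session_set_cookie_params() only gained a samesite option in PHP 7.3, so on older versions the attribute has to be added by hand. A minimal sketch (cookie name and value are placeholders; the site must already be served over HTTPS for Secure to mean anything):

<?php
// PHP < 7.3: emit the SameSite attribute manually via a raw header.
// 'widget_session' and its value are placeholder names.
header('Set-Cookie: widget_session=abc123; Path=/; Secure; HttpOnly; SameSite=None');

// For the PHP session cookie, a known workaround is to append the
// attributes to the path argument, which PHP does not escape:
session_set_cookie_params(0, '/; SameSite=None; Secure');
session_start();

Note that if the page truly sets no cookies of its own, the warning likely refers to cookies set by YouTube's embed, which only YouTube can change; the sketch above only matters for cookies your own site sends into a cross-site context. Embedding via the www.youtube-nocookie.com domain also reduces the cookies the embed sets.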

Related

Preventing Safari prefetching web pages (server side)

When typing a web link into the Safari URL field, the browser attempts to prefetch all links it has previously seen, both GET and POST.
This causes every web link the server supports that is listed in the dropdown as a possible completion to be activated. This is problematic. For example, if a web site has authentication with an /auth/logout link for logging out, this link can be activated if it appears in the dropdown, logging the user out unintentionally.
Many browsers send a specific header (e.g. 'Purpose: Prefetch' in Chrome) that allows the server side to filter prefetch/preload requests (e.g. return a 503), but Safari doesn't seem to send any distinguishing header field. It also seems to try to prefetch POST requests, which seems very broken to me. GET requests are notionally at least idempotent, but POST requests are supposed to be understood as data-changing.
Has anyone got a solution to this? Please don't suggest that the browser preload feature can be turned off by the end user - that ISN'T a solution from a service-delivery perspective.
Has anyone got an explanation as to why browsers would do this and NOT signal the purpose in a header field? (I get why prefetching is a useful UX capability, but not why it's useful while typing URLs, especially for URLs already previously downloaded and thus capable of returning prefetching metadata that would allow a server to selectively disable the capability where appropriate.) From what I can tell, this kind of functionality started to appear with header fields included, but some browsers have since removed this signature. Why? It seems dreadfully broken to me.
Thanks.
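For what it's worth, the filtering the question describes for other browsers might look like this. A PHP sketch: 'Purpose' is the Chrome header named above, and 'X-Moz' is a prefetch header Firefox has used; neither helps with Safari, which is the point of the question.

<?php
// Reject requests that identify themselves as prefetches.
// Chrome: 'Purpose: prefetch'; Firefox has used 'X-Moz: prefetch'.
// Safari (per the question) sends no distinguishing header at all.
$purpose = $_SERVER['HTTP_PURPOSE'] ?? $_SERVER['HTTP_X_MOZ'] ?? '';
if (strcasecmp($purpose, 'prefetch') === 0) {
    http_response_code(503);  // tell the browser to come back later
    header('Retry-After: 5');
    exit;
}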

Last-Modified cache does not update when user has logged in

I am using the Last-Modified HTTP header to help browsers with caching, and I have noticed an annoying problem.
If a user visits a page BEFORE he has logged in, the browser shows the cached page even after the user logs in. This means he is unable to see his log-in information (profile pic, notifications, etc.) in the header until he visits a page on the site he has not visited before.
Because the content of the actual article itself has not changed since his first visit, he is served the same page even after he logs in.
I have tried checking whether the user has just logged in (using a SESSION.LoggedIn variable), and then using the current date/time for Last-Modified, Expires and Cache-Control to tell the browser to serve up a fresh copy of the page, but it doesn't work on the Android browser. It just serves the cached version again. What this means is that the user cannot tell that they have logged in, because their name and other credentials don't appear at the top of the page.
How do I use HTTP header caching effectively and also take care of people visiting the same page both logged-in and anonymously? The logged-in information sits in the header of the site (just like on SO), so is there a way not to cache the site header but to cache the rest of the page?
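For reference, the header combination usually recommended for pages that mix cacheable content with per-user state is "privately cacheable, always revalidate". A PHP sketch, purely illustrative (the answers below suggest the original site is ColdFusion; the file path and session key here are hypothetical):

<?php
session_start();

// Let the browser keep a private copy but revalidate on every use,
// so a fresh login is reflected immediately.
header('Cache-Control: private, no-cache, must-revalidate');
header('Vary: Cookie'); // the response differs per cookie/session

// Last-Modified must cover BOTH the article and the login state:
$articleModified = filemtime('article.html');    // placeholder source
$loginChanged    = $_SESSION['loginTime'] ?? 0;  // hypothetical session key
$lastModified    = max($articleModified, $loginChanged);
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');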
Use a tool like Firebug to see the network traffic for a URL. You'll notice that it consists of 'file' objects: HTML files, JavaScript files, CSS files, images, etc.
I don't think you can cache a div (or other page-layout construct) very easily.
It has been a while since I attempted to use the Last-Modified HTTP header for caching. I ran into problems similar to yours. Browser implementation/compatibility wasn't 100%. I've also used Last-Modified in an attempt to inform search-engine spiders that files have changed. That didn't work very well either. Eventually I removed all of my attempts at Last-Modified caching/hinting and just let the web server and browsers deal with it.
In the end I spent a lot of time optimizing database queries and indexes, and in a few cases implemented the cachedwithin attribute of cfquery tags. That approach to improving site performance has worked better for me.

How to exploit HTTP header XSS vulnerability?

Let's say that a page is just printing the value of the HTTP 'referer' header with no escaping. So the page is vulnerable to an XSS attack, i.e. an attacker can craft a GET request with a referer header containing something like <script>alert('xss');</script>.
But how can you actually use this to attack a target? How can the attacker make the target issue that specific request with that specific header?
This sounds like a standard reflected XSS attack.
In reflected XSS attacks, the attacker needs the victim to visit some site which in some way is under the attacker's control. Even if this is just a forum where an attacker can post a link in the hope somebody will follow it.
In the case of a reflected XSS attack using the Referer header, the attacker could redirect the user from the forum to a page on the attacker's domain.
e.g.
http://evil.example.com/?<script>alert(123)</script>
This page in turn redirects to the following target page in a way that preserves the Referer:
http://victim.example.org/vulnerable_xss_page.php
Because this page shows the Referer header without proper escaping, http://evil.example.com/?<script>alert(123)</script> is output within the HTML source, executing the alert. Note this works in Internet Explorer only.
Other browsers automatically URL-encode the payload, rendering
%3Cscript%3Ealert%28123%29%3C/script%3E
instead, which is safe.
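To make this concrete, here is a minimal sketch of such a vulnerable page and its fix. PHP is an assumption (the example URL above ends in .php); the output strings are invented:

<?php
// vulnerable_xss_page.php - reflects the Referer header into HTML.
$referer = $_SERVER['HTTP_REFERER'] ?? '';

// Vulnerable: the raw header value lands in the HTML source.
echo 'You came from: ' . $referer;

// Fixed: encode the value for the HTML context before output.
echo 'You came from: ' . htmlspecialchars($referer, ENT_QUOTES, 'UTF-8');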
I can think of a few different attacks; maybe there are more, which others will hopefully add. :)
If your XSS is just some header value reflected unencoded in the response, I would say that's less of a risk than a stored XSS. There may be factors to consider, though. For example, if it's a header that the browser adds and that can be set in the browser (like the user agent), an attacker may get access to a client computer, change the user agent, and then let a normal user use the website, now with the attacker's JavaScript injected. Another example that comes to mind is a website that displays the URL that redirected you there (the Referer) - in this case the attacker only has to link to the vulnerable application from his carefully crafted URL. These are edge cases, though.
If it's stored, that's more straightforward. Consider an application that logs user access with all request headers, and let's suppose there is an internal application for admins that they use to inspect logs. If this log viewer application is web based and vulnerable, any javascript from any request header could be run in the admin context. Obviously this is just one example, it doesn't need to be blind of course.
Cache poisoning may also help with exploiting a header XSS.
Another thing I can think of is browser plugins. Flash is less prevalent now (thankfully), but with different versions of Flash you could set different request headers on your requests. What exactly you can and cannot set is a mess and very confusing across Flash plugin versions.
So there are several attacks, and it is necessary to treat all headers as user input and encode them accordingly.
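Whatever backend renders such logs, the defence is the same everywhere: treat header values as user input and encode them for the output context. A hypothetical PHP sketch of the log-viewer case above:

<?php
// Hypothetical log viewer: $entries would come from the access log.
$entries = [['User-Agent' => '<script>alert(1)</script>']];
foreach ($entries as $headers) {
    foreach ($headers as $name => $value) {
        // Encode both name and value; headers are user input too.
        printf("<li>%s: %s</li>\n",
            htmlspecialchars($name, ENT_QUOTES, 'UTF-8'),
            htmlspecialchars($value, ENT_QUOTES, 'UTF-8'));
    }
}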
Exploiting XSS via the Referer header is almost like traditional reflected XSS. The one additional point to make is that the attacker's website redirects to the victim website, so a Referer header carrying the required JavaScript is appended to the request for the victim page.
One essential point needs to be discussed here: why can this be exploited only with IE, and not with other browsers?
The traditional answer is that Chrome and Firefox automatically encode URL parameters, so the XSS is not possible. But the interesting thing is that we have any number of bypasses for traditional XSS filters - so why shouldn't there be a bypass for this scenario too?
There is. We can bypass it with the following payload, in the same way HTML validation is bypassed by traditional payloads:
http://evil.example.com/?alert(1)//cctixn1f
Here the response could be something like this:
The link on the referring page seems to be wrong or outdated.
(where 'referring page' is rendered as a link whose href is taken from the Referer header)
If the victim clicks 'referring page', the alert is generated.
Bottom line: not only in IE - XSS can be possible even in Chrome and Firefox when the Referer is used as part of an href attribute.

How to force a browser to get new headers without declaring expires?

We have an established site that is now being affected by CSP rules. I've added all the scripts we need to the Content-Security-Policy header.
When visiting the site using private browsing, or from a device that hasn't been to the site before, I get the new CSP header and everything works.
However, users who have been to the site before get the old headers, and they get CSP warnings.
NB: I cannot use expires 0 or similar, because the browsers are not looking for the new headers, so they never learn that the headers have expired.
I'm looking for a way to tell the browser "hey, you should check out my cool new headers, because they're new".
Turns out I was being foiled by Local Storage that was overriding the CSP header. Even clearing the cache doesn't solve the problem, as Local Storage remains.
Hope this helps somebody else!
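As an aside, for the specific "stale Local Storage" situation described above, browsers that support it offer a server-side reset via the Clear-Site-Data response header. A sketch; support varies and HTTPS is required:

<?php
// Ask supporting browsers to drop cached responses and stored data
// (including Local Storage) for this origin.
header('Clear-Site-Data: "cache", "storage"');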

Session lost on redirect

I have a web app that is being hit by Facebook. The login page retrieves the keys that I need and sets some session variables. When the server then redirects the user to the next page, the session information is lost. I'm running the IIS engine on Vista Ultimate at the moment; the app pools don't matter because I'm using a state service, and I'm still losing the session state. I've tried both the overloaded method of the Response.Redirect function and also adding a header to the page to force the redirect, and none of this seems to work. Does anyone have any ideas of what I'm missing?
I’ve tried both of these:
Response.Headers.Add("refresh", "3;url=Dashboard.aspx")
And
Response.Redirect("Dashboard.aspx", False)
[EDIT]
So I just did a little experiment: when I hit the URL directly from the Facebook page I get the problem, but when I copy the URL for the IFrame into a new browser window and try it, it works fine.
[EDIT]
So I found an article on this, and after adding the header below, the problem was solved (for now):
http://support.microsoft.com/kb/323752
' Send a compact P3P policy so the browser accepts the third-party cookie:
Response.AddHeader("P3P", "CP=""CAO PSA OUR""")
"when I hit the URL directly from the Facebook page I get the problem, but when I copy the URL for the IFrame into a new browser window and try it, it works fine."
If you're in an iframe, any cookies you set are “third-party cookies”. Third-party cookies may be subject to more stringent conditions than the normal “first-party” cookies you are setting when the user is directly on your site. This can be due to different browser default cookie handling or because the user has deliberately configured it like that. (And for good reason: many third-parties are unpleasant privacy-invading advertisers.)
In particular, in IE6+ with the default settings, you cannot set a third-party cookie unless you write a P3P policy promising that you will be a good boy and not flog your users' data to the nearest identity thief.
(In practice of course P3P is a dead loss, since there's nothing stopping the site owner from just lying. Another worthless complication that provides no actual security. Yay.)
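(For a PHP backend, the same workaround is a one-liner - the compact policy string matches the ASP.NET line quoted above, and the same caveat about P3P's worthlessness applies:)

<?php
// Compact P3P policy so older IE accepts the third-party (iframe) cookie.
header('P3P: CP="CAO PSA OUR"');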
I'd try running Fiddler and see if your session cookie is being sent properly with the response when interacting with your app via Facebook.
The session also depends on cookie support on the client. When you say the app "is being hit by Facebook", are you sure that, by whatever means they are "hitting" you, they support cookies?
Response.Redirect and refresh don't carry the session. Server.Transfer() can, but it loses the ability to transfer to other servers/sites.
