Is there a way for the consumer website (e.g. nytimes.com) to assure itself that an iframe it loads will NOT be able to communicate with any servers, and only has access to postMessage? This can be done from the server hosting the iframe's document, but I don't want to have to trust that server.
Here is what I need this for: I want to store non-extractable asymmetric keys using SubtleCrypto, load some static HTML with inline JS that was audited by third parties, be sure that is what was actually loaded using Subresource Integrity (SRI), and finally pass some data to it using postMessage and then CLOSE THE DOOR in that sandbox by overriding postMessage, to GUARANTEE to the user of the user agent that any data decrypted and displayed from that point on cannot be leaked to anyone else (assuming the user agent follows web standards).
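To make the intent concrete, here is a minimal frame-side sketch, assuming the audited document generates its own key pair; the RSA-OAEP parameters and the one-shot listener are my assumptions, and the "door" metaphor still depends on the frame's CSP and sandbox cutting off every outbound channel:

```js
// Inside the audited iframe document, in a <script type="module"> so that
// top-level await is allowed. The key pair is created with extractable: false,
// so the private key can never leave this browsing context.
const keyPair = await crypto.subtle.generateKey(
  {
    name: 'RSA-OAEP',
    modulusLength: 2048,
    publicExponent: new Uint8Array([1, 0, 1]),
    hash: 'SHA-256',
  },
  false,                      // extractable: false
  ['encrypt', 'decrypt']
);

// "Close the door": accept exactly one bootstrap message, then stop listening.
// Outbound channels still have to be blocked by the document's CSP and the
// embedder's sandbox attribute; removing the listener only closes the inbox.
window.addEventListener('message', (event) => {
  // ... decrypt event.data with keyPair.privateKey and render the result ...
}, { once: true });
```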
How will the USER know they can trust the iframe? Because the iframe would display some familiar string they chose, decrypted by the same private key, after the door is closed. Since the key is not extractable, no server can decrypt that string, so it must be the audited, safe HTML + JS environment trusted by the user.
But how can the user, and the embedding site, verify and be SURE what the Content Security Policy of the iframe is?
OK, it turns out HTML has the "http-equiv" meta tag, which can set the Content Security Policy of that HTML document. And the enclosing site can use SRI to make sure it is loading a document it previously audited.
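One caveat: the integrity attribute is specified for script and link elements, not for iframes, so the enclosing site may have to emulate SRI itself. A sketch under that assumption (EXPECTED_HASH and the URL are placeholders):

```js
// Embedder-side sketch: fetch the audited document, verify its hash, and only
// then load it into a tightly sandboxed frame. The document itself should
// carry a meta CSP that blocks all network access, e.g.:
//   <meta http-equiv="Content-Security-Policy" content="default-src 'none'">
const EXPECTED_HASH = '...base64 SHA-256 pinned at audit time...'; // placeholder

async function loadAuditedFrame(url) {
  const bytes = await (await fetch(url)).arrayBuffer();
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  const hash = btoa(String.fromCharCode(...new Uint8Array(digest)));
  if (hash !== EXPECTED_HASH) throw new Error('integrity check failed');

  const frame = document.createElement('iframe');
  frame.sandbox = 'allow-scripts';  // no allow-same-origin: opaque origin
  frame.src = URL.createObjectURL(new Blob([bytes], { type: 'text/html' }));
  document.body.appendChild(frame);
  return frame;
}
```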
So that takes care of trust by the enclosing site. However, I am not sure how a user of a mainstream browser can verify:
1. that the loaded iframe document is the same as a document loaded before (no subresource integrity indicator is visible to the user), and
2. that the loaded iframe document has the right Content Security Policy, short of using View Source.
Perhaps someone can address the two points above: how can the user trust the loaded document, and trust that the door is CLOSED? The web makes it very hard to avoid trusting that servers on the internet won't collude and change the code at any time.
One of my clients has a cross-domain analytics setup.
Everything works well, but there are different behaviors when the user gives full cookie consent and when they allow only strictly necessary cookies.
Behavior in case of full cookie consent:
GA stores data in cookies, i.e. the _ga and _ga_ID cookies can be found in the browser's cookies tab.
Behavior in case of only strictly necessary cookie consent:
GA stores some data in the URL, for example:
https://www.example-page.com/?_gl=1*XXXXXXX*_up*MQ..*_ga*ZZZZZZZ.*_ga_YYYYYYY*YYYYYYY..
According to Google's documentation, the second case is the default behavior, and cross-domain measurement works when the _gl param is added to the URL.
What I do not understand is why the URL params are not added every time, only when some cookies are not accepted, so I would like to get a better understanding of this.
There is also a possible issue which I do not understand:
GA params are added to the URL even when the user is just switching between subpages of the same domain, i.e. from www.example-page.com/home-page to www.example-page.com/about-page. If I understand correctly, this should not happen, as I am staying within the domain.
The questions I am most interested in are:
How is GA determining if it should store its data as cookies or push it to the URL?
Where are these parameters stored before the user is redirected the first time? Are they part of the dataLayer / google_tag_manager global variables?
Is there a way to store the params somewhere other than the URL when full cookie consent is not granted?
Is adding GA params to the URL even when staying within the same domain correct behavior?
Project details:
The site runs on WordPress and uses OneTrust for cookie management.
EDIT: The issue with the URL params is resolved.
In my case, it was caused by an update of the consent mode template (gtm-templates-simo-ahava). Reverting to the previous version fixed the problem. The cause is possibly connected to this pull request in the template repository.
How is GA determining if it should store its data as cookies or push it to the URL?
Pushing the data to the URL is the mechanism of cross-domain tracking. You set a list of domains that cross-domain tracking should work for, and this is likely your problem here: in the vast majority of cases you're not supposed to list subdomains, only root domains.
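For reference, this is roughly what that configuration looks like in gtag.js (the measurement ID and domains below are placeholders; in GTM the same list goes in the linker/cross-domain settings of the GA tag):

```js
// Cross-domain linker: list root domains only. Subdomains of the same root
// already share first-party cookies, so they don't belong in this list.
gtag('set', 'linker', {
  domains: ['example-page.com', 'example-shop.com']  // placeholder domains
});
gtag('config', 'G-XXXXXXX');  // placeholder measurement ID
```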
Where are these parameters stored before the user is redirected the first time? Are they part of the dataLayer / google_tag_manager global variables?
This data is stored in cookies before the user goes to a different domain. If cookies are deleted, then it's stored in the JS scope of the GA library, which implies it would be erased and regenerated on JS context loss: lost on page unload, regenerated on page load.
Is there a way to store the params somewhere other than the URL when full cookie consent is not granted?
Well, yes, but it's very tricky and expensive, and the immediate question is why you would do that: it would defeat the purpose of blocking the cookie. Natively, GA doesn't support other methods of passing the value, but if you're into tinkering, you can store the value on your backend and then retrieve it using some "primary functionality" cookie. Another option is using a third-party server's cookies, but that would defeat the purpose even more.
Is adding GA params to the URL even when staying within the same domain correct behavior?
No, it's most likely a mistake.
Now, you really asked all the right questions, so I don't have much to add, except that disabling your primary anonymized behavioral tracking is usually a lazy "safe" choice. And lazy here implies wrong.
Normally, larger corps don't block primary tracking. They only block third-party, marketing-related tracking. Basically, pixels. They consider their main analytics part of the primary functionality, which is a strong case given that main analytics data is often used in debugging, performance measurement, and even app security audits.
Finally, using OneTrust or a similar solution to completely manage your tracking is suboptimal. They basically just destroy all "offending" cookies all the time. This will mess up your behavioral data very significantly.
The proper way to use consent management systems is to declare the user's consent choice in your tag management system and then block rules/tags from firing there when consent is not given. You normally just carefully block marketing tags based on consent. Remember, consent management systems only delete cookies, because that's trivial. They don't block network requests. The absence of cookies may not prevent the data from being sent, often still uniquely identifying the client via the primary cookie's user ID, allowing the activity to be matched to the backend database.
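As a sketch of that division of labor, Google's Consent Mode API lets you default everything to denied and update when the CMP reports a choice (the gtag consent calls are the documented API; onConsentChoice is a hypothetical CMP hook):

```js
// Deny everything by default, before any tags fire.
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied'
});

// Hypothetical CMP callback: once the user makes a choice, update consent and
// let the tag manager's consent-aware triggers decide which tags may fire.
onConsentChoice((choice) => {
  gtag('consent', 'update', {
    analytics_storage: choice.analytics ? 'granted' : 'denied',
    ad_storage: choice.marketing ? 'granted' : 'denied'
  });
});
```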
I've been asked if there's any way I can link to a resource on a site without making that resource visible via an external link.
The client wants a price list only available via a link on a page on the site itself. Is this possible?
Well, the link will be visible, but if it's a link to something nobody else is authorized to see then only authorized users would be able to see it.
For example, you might link to something which requires authentication. When anybody clicks on that link, they're prompted for that authentication and are validated before the content is returned to them. If only this particular client is authorized, nobody else would see the content.
You might even link to a URI which is only physically accessible by that particular client, for example a file:// link pointing at a file on that client's machine, with link text like "click here".
Only that client has that file, so the link would fail for anybody else.
Either way, the link isn't the issue. The access to the resource being linked to is the issue. As long as that access is protected, nobody else can see it.
Of course, as an added UX concern, you might also display the link only when that same authorization is available. You'd still want to protect the resource itself, since otherwise it would just be "security through obscurity", but you should really only show the link if the user is expected to be able to access it.
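A minimal sketch of both points, assuming a Node/Express server with session-based authentication; the route, session fields, and file path are all hypothetical:

```js
const express = require('express');
const path = require('path');
const app = express();
// ... session middleware configured elsewhere ...

// The resource itself is protected: the URL can be public knowledge.
app.get('/price-list', (req, res) => {
  if (!req.session?.user?.canViewPrices) {
    return res.status(403).send('Forbidden');
  }
  res.sendFile(path.join(__dirname, 'private', 'price-list.pdf'));
});

// And the page template only renders the link for authorized users, e.g.:
//   <% if (user && user.canViewPrices) { %><a href="/price-list">Price list</a><% } %>
```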
I am trying to figure out how to store some user information which will control content visibility in a way that will:
1. Not require constant trips to the server to query SQL to see what the user does/does not have access to, AND
2. Be something the user cannot edit in the browser's developer tools/console.
Cookies, query strings, and even HTML5 Local Storage or SQLite are all great storage options, but they can all be edited by a tech-savvy user. How do I control content based on a user's security level while limiting queries and preventing users from hacking around it?
The only way to prevent the user from seeing specific content is to validate their access to the content server-side and not render it to them. Any client-side validation can be circumvented. Even if you devise a way to locally store information that the user can't see, you'd still be sending the content to them and using client-side code to check that value.
The user can see any content you send them.
You don't necessarily need to make constant trips to the SQL database to check roles and permissions. You can persist some cached roles and authorizations server-side, such as in the session state, and validate against those for the life of the user's session. At that point you're not incurring a performance cost because the user is requesting pages anyway. With each page request, you would simply determine what the user can or can't see in the response.
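A hedged sketch of that pattern, assuming Node/Express with a session store; the db module and role names are hypothetical:

```js
const express = require('express');
const db = require('./db');  // hypothetical data-access module
const app = express();
// ... session middleware configured elsewhere ...

// Load roles from SQL once per session, then reuse the cached copy.
async function withRoles(req, res, next) {
  if (!req.session.roles) {
    req.session.roles = await db.getRolesForUser(req.session.userId); // one query per session
  }
  next();
}

// Authorize each request against the session cache; no SQL round trip needed.
function requireRole(role) {
  return (req, res, next) =>
    req.session.roles.includes(role) ? next() : res.status(403).send('Forbidden');
}

app.get('/admin/report', withRoles, requireRole('admin'), (req, res) => {
  res.render('report'); // restricted content is only ever rendered server-side
});
```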
As part of a webapp I'm building, there is an iframe that allows the currently logged-in user to edit some content that will only be displayed in their own logged-in profile, or on a public page with no logged-in users.
As that means the content will only be viewable to the user who entered it, or to a user on a public site, does this mean the risk of XSS is moot? If they can only inject JavaScript into their own page, then they can only access their own cookies, yeah? And if we then display that content on a public page that has no concept of a logged-in user (on a different subdomain), then there are no cookies to access, correct?
Or is my simplistic view of the dangers of XSS incorrect?
Anthony
Stealing authorization cookie information is actually not the only harm JavaScript injection can bring to other users. Redirects, form submissions, annoying alerts, and countless other bad things can happen. You should never trust HTML content provided by a user, nor display it to others unsanitized.
To avoid HTML injection while still allowing users to provide HTML, the general idea is to have a predefined set of HTML tags that can bring no harm to other users, for example text or paragraph and div tags, but not unchecked images or JavaScript. You parse the provided HTML and delete all but those tags.
You can use HtmlAgilityPack or any other library that can help you parse the HTML provided by the user. Then you can filter out and delete any unwanted markup, leaving only safe tags.
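That answer has .NET's HtmlAgilityPack in mind; purely as an illustration of the same whitelist idea, here is a browser-side JavaScript sketch (for production, prefer a vetted library such as DOMPurify):

```js
// Whitelist sanitizer sketch. Tags outside ALLOWED are unwrapped, script and
// style contents are dropped entirely, and all attributes are stripped.
const ALLOWED = new Set(['P', 'DIV', 'B', 'I', 'EM', 'STRONG', 'UL', 'OL', 'LI', 'BR']);

function sanitize(html) {
  const doc = new DOMParser().parseFromString(html, 'text/html');
  for (const el of [...doc.body.querySelectorAll('*')]) {
    if (el.tagName === 'SCRIPT' || el.tagName === 'STYLE') {
      el.remove();                         // drop the tag and its contents
    } else if (!ALLOWED.has(el.tagName)) {
      el.replaceWith(...el.childNodes);    // drop the tag, keep its children
    } else {
      for (const attr of [...el.attributes]) {
        el.removeAttribute(attr.name);     // strip onclick=, style=, href=, etc.
      }
    }
  }
  return doc.body.innerHTML;
}
```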
Often an attacker will use multiple vulnerabilities when attacking a site. There are a couple of problems with allowing users to XSS themselves.
CSRF - A user can visit a malicious site which posts malicious data to their profile, and they are thus XSSed.
Clickjacking with content - See http://blog.kotowicz.net/2011/07/cross-domain-content-extraction-with.html
Next, if that content is displayed on the public page, it could redirect users to sites containing exploits that automatically take over the user's computer, or it could redirect to porn.
On some websites, when you want to log in, you need to enter a captcha as well. If I want to provide support for a user to enter a captcha into my application (which will then log into the website), how would I do this?
My problem is that the link to the captcha image is like this: example.com/captcha, and it serves a different image each time it's accessed.
My approach is like this:
request page
download image
show image to user
user inputs login information
application logs in
The thing is, if you download the image in order to show it to the user, you're actually receiving a different image than the one generated when the page was loaded, right? How can I get to the image that was generated when the page was loaded, so that when I show it to the user, it's the correct one?
The question is language agnostic.
I think your problem is about sessions: the session in which your app downloads the image and the session in which it submits the login form may not be the same session, so your captcha will never be correct. You should maintain the session between requests; normally it's some cookie set by the website.
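A minimal sketch of that, assuming Node 18+ with the built-in fetch; example.com, the form field names, and askUser are hypothetical:

```js
// The whole point is replaying the session cookie from the captcha download
// on the login submission, so the server can match the submitted answer to
// the image it actually served.
async function loginWithCaptcha(username, password, askUser) {
  const captchaResp = await fetch('https://example.com/captcha');
  // Simplistic cookie handling; a real client should parse Set-Cookie properly.
  const cookie = captchaResp.headers.get('set-cookie') ?? '';
  const image = Buffer.from(await captchaResp.arrayBuffer());

  const answer = await askUser(image); // show the image, get the user's answer

  return fetch('https://example.com/login', {
    method: 'POST',
    headers: { cookie, 'content-type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username, password, captcha: answer }),
  });
}
```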
By design, most captchas will always give you a different image. There is no way to work around that fact.
The first thing to do is to open up Fiddler. That way you can see what the browser is doing so that it can authenticate and remain authenticated.
It usually comes down to a cookie being sent. So what you need to do is hold the cookie in your client app and have all requests sent with that cookie. Different platforms provide features to do this, but I'm sure a quick search will show you how.
Remember to pay attention to everything being exchanged in Fiddler; you need to make sure your app sends the same. Besides cookies, pay attention to any hidden fields that JavaScript might set on the form.
It sounds like you're trying to invent a captcha solution yourself. Have you considered using reCAPTCHA? It's free.
Can you be a bit more specific about your situation? From what you've said, I'm assuming the following:
You have a "client GUI app" that logs in to a third-party site. Is this a web-app, or a desktop/standalone application? In what language is it written?
Your app contacts the third party site and downloads the Captcha image. This image is then shown to the user.
The user enters the captcha phrase and submits it to your app. Your app then submits this phrase to the site for validation. This is where sessions come in. Assuming the remote site uses cookie-based session tracking, you will need to send the same cookie to the third-party server with this submission as you did when the image was downloaded (in the step above). This allows the server to match your submission to the correct image it sent. Precisely how you do this depends on the language your app is written in and the precise structure of it all. Without more information, a more specific solution is impossible.
The image that's generated is also the image served to the user. Your 'main' HTML page doesn't/shouldn't generate the image; it only embeds it using the image tag.
You could pass a token of some kind with the captcha image, perhaps appended to the filename, such as captcha-0ad719bef61bc6a0.jpg, and the appended data could link to a temporary table on the database server side that holds the correct answer. This would allow you to check things were OK without passing both the image and the answer across to your application.
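A hedged sketch of that token idea, assuming Node/Express; generateCaptchaText, the routes, and the in-memory map are hypothetical stand-ins for the real generator and temporary table:

```js
const crypto = require('crypto');
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

const answers = new Map(); // stand-in for the temporary database table

app.get('/captcha', (req, res) => {
  const token = crypto.randomBytes(8).toString('hex');
  const text = generateCaptchaText();        // hypothetical helper
  answers.set(token, text);
  res.redirect(`/captcha-${token}.jpg`);     // the filename carries the token
});

app.post('/login', (req, res) => {
  const ok = answers.get(req.body.token) === req.body.captcha;
  answers.delete(req.body.token);            // single-use token
  res.send(ok ? 'captcha ok' : 'captcha wrong');
});
```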
I'm not sure I entirely understand this question, but wouldn't you simply store the captcha locally after requesting it from the server, embed the local image in the client application, and store any session captcha data necessary for the captcha to be validated on post, assuming the user's input is correct?
If the problem is that the captcha changes every time you request it, just request it only once.
Can you offer any more clarification if this wouldn't apply to you?
It varies from one captcha to another. Maybe you need to use sessions, or cookies, or some captcha image filename. Show the page with that captcha.