As part of a webapp I'm building, there is an iframe that allows the currently logged in user to edit some content that will only be displayed in their own logged-in profile, or on a public page with no logged in users.
As that means the content will only be viewable by the user who entered it, or by anonymous visitors on the public site, does this mean the risk of XSS is moot? If users can only inject JavaScript into their own page, then they can only access their own cookies, right? And if we then display that content on a public page that has no concept of a logged-in user (on a different subdomain), then there are no cookies to access, correct?
Or is my simplistic view of the dangers of XSS incorrect?
Anthony
Stealing session cookies is not the only harm JavaScript injection can do to other users. Redirects, form submissions, annoying alerts and countless other bad things can happen. You should never trust HTML content provided by a user, nor display it to others unsanitized.
To prevent HTML injection while still letting users provide HTML, the general idea is to have a predefined whitelist of HTML tags that can do no harm to other users, for example simple text or paragraph tags, but not unchecked images or scripts. You parse the provided HTML and delete everything except those tags.
You can use HtmlAgilityPack or any other library that can parse user-provided HTML. Then you can filter out and delete any unwanted markup, and keep only safe tags.
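For illustration, here is a minimal whitelist pass in JavaScript. This is only a sketch, and a regex filter like this is easy to bypass, which is exactly why a parser-based library (such as HtmlAgilityPack) is the better choice; the allowed tag set here is just an example:

```javascript
// Sketch of tag whitelisting: keep a handful of harmless tags, strip their
// attributes, and drop everything else. Not production-grade sanitization.
const ALLOWED = new Set(["b", "i", "em", "strong", "p", "br"]);

function sanitize(html) {
  return html.replace(/<\/?([a-zA-Z0-9]+)[^>]*>/g, (match, name) => {
    const tag = name.toLowerCase();
    if (!ALLOWED.has(tag)) return "";            // delete disallowed tags
    return match.startsWith("</") ? `</${tag}>` : `<${tag}>`; // strip attributes
  });
}

console.log(sanitize('<p onclick="evil()">Hi <script>alert(1)</script></p>'));
// → <p>Hi alert(1)</p>
```

Note that the text inside a removed tag ("alert(1)" above) survives as plain text, which is harmless once the surrounding script tags are gone.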
Often an attacker will use multiple vulnerabilities when attacking a site. There are a couple of problems with allowing a user to XSS him/herself.
CSRF - A user can visit a malicious site which posts malicious data to his profile and is thus XSSed.
Clickjacking with content - See http://blog.kotowicz.net/2011/07/cross-domain-content-extraction-with.html
And if that content is displayed on the public page, it could redirect visitors to sites hosting exploits that automatically take over their computers, or simply redirect them to porn.
Related
I have a website that mostly serves anonymous users accessing public information on listings pages. A small subset of our user base will have password-protected accounts that let them customize the filtering and sorting of information on these pages. The idea is that once a user logs in, the site can remember their viewing preferences, so when they go to a particular listing it shows up the way they want it to.
Currently we are using Next.js's Incremental Static Regeneration to serve pre-rendered pages. This works great for anonymous users.
But I worry that if we add authentication and custom sorting, we would run into one of two issues:
If we keep getStaticProps, logged-in users would get a flash of default-sorted content as the hydrated page detects that the user is logged in and re-sorts the page. Or they would get a loading state before actually seeing the content.
If we switch to SSR, we'd get slow authentication checks on the backend for everyone on every page load including for anonymous users, who are the vast majority of our users.
Is there a better way to deal with this? I wonder if, for example, there is a tweak I can add to server.js or something that switches from static to SSR if it sees a session cookie in the request headers.
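For instance, something like this routing helper is what I imagine (purely a sketch: the cookie name and the /ssr path prefix are made up, and in Next.js I'd presumably put this logic in middleware with NextResponse.rewrite):

```javascript
// Anonymous visitors keep the statically generated page; anyone carrying a
// session cookie gets rewritten to a server-rendered variant of the route.
const SESSION_COOKIE = "session"; // made-up cookie name

function resolvePath(path, cookieHeader = "") {
  const hasSession = cookieHeader
    .split(";")
    .some((part) => part.trim().startsWith(`${SESSION_COOKIE}=`));
  return hasSession ? `/ssr${path}` : path;
}

console.log(resolvePath("/listings/42"));                     // → /listings/42
console.log(resolvePath("/listings/42", "session=abc; x=1")); // → /ssr/listings/42
```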
We'd like to implement OpenGraph on an intranet application, so that when people share a URL from the application on a social network (Yammer, Jive, Chatter...), it shows a nice thumbnail, description, and so forth.
The problem: because Yammer is not connected to the intranet, it follows the redirects and serves OpenGraph data from the login page...
Is there a way to behave properly in such a case?
We've come up with 3 possible solutions:
Implement an unknown but possibly existing part of the OpenGraph protocol for serving private pages, bypassing the redirects as far as possible
Do some kind of cloaking: detect that the agent is Yammer or Chatter, and serve a dedicated page
Keep the OpenGraph metadata in some kind of session, and serve it from the login page (where the social network eventually ends up...)
Thanks for your input if you've run into this problem too!
The third solution sounds like the best one. Since your rules allow showing part of the data outside the intranet, you can add the individual thumbnail and description to the meta tags of the login page.
If a user is logged in, they see all the data at yoursite.com/username/post123/ as usual.
But if the user is not logged in (like any bot), they see a login form (with the thumbnail and description in the meta tags) at the same address, yoursite.com/username/post123/.
So all bots will see proper OG data, and all users will be able to log in as usual.
(That is, you shouldn't redirect not-logged-in visitors to yoursite.com/loginpage; you should show a login form at every such page.)
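A rough sketch of the idea, with the post object and the isAuthenticated flag standing in for your own data and session layers:

```javascript
// Same URL, two renderings: crawlers and logged-out users get the login
// form, but the OG meta tags always describe the actual post.
function renderPage(post, isAuthenticated) {
  const meta =
    `<meta property="og:title" content="${post.title}" />` +
    `<meta property="og:description" content="${post.description}" />` +
    `<meta property="og:image" content="${post.thumbnail}" />`;
  const body = isAuthenticated
    ? `<article>${post.body}</article>`
    : `<form method="post" action="/login"><!-- login fields --></form>`;
  return `<html><head>${meta}</head><body>${body}</body></html>`;
}
```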
I've got some things on my mind, so I thought I'd ask the veterans here. I'm creating a website using Razor syntax and WebMatrix, and I'm currently implementing a user login system. So my questions are:
In WebSecurity, when a token is generated (for creating a new account, recovering a password, etc.), is this token a public key? Can it be safely emailed to the user over an unsecured network or email? Is it good practice (or useful) to further encrypt this token?
I've set my secured pages not to be cached by the browser, i.e. pages the user accesses after signing in with his password. I think it's a necessary measure, because when a user logs out I don't want him to press the browser's back button and see the secured pages again. So I set all the secured pages' expiry as follows:
Response.Expires = -1;
Response.Cache.SetNoServerCaching();
Response.Cache.SetAllowResponseInBrowserHistory(false);
Response.CacheControl = "no-cache";
Response.Cache.SetNoStore();
My question on the above: if I set my pages to expire immediately, so that the browser caches nothing and reloads the page every time the user visits it, does that mean the browser will not even cache the linked style sheets, script files and images? I've set my images to preload so that the site's presentation works smoothly; will the immediate expiry of the page cause these images and everything else to be reloaded over and over on each page?
Thanks.
It's not a "public" token: anyone who gets access to it can use it to reset the user's password and log in. So it does need to be sent securely, and the reset link should require SSL.
No, the setting of cache expiry on specific pages will not affect the caching of other content. You can set the cache policy/headers of static content using IIS manager, or in the web.config.
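For example, a web.config fragment along these lines (the seven-day max-age is an arbitrary choice) tells IIS to let browsers cache static files while your secured pages keep their no-cache headers:

```xml
<!-- Illustrative only: client-side caching for static content in IIS7+ -->
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge"
                 cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>
```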
Many websites discuss broken images being good warning signs of a possible XSS attack in the pages source code. My question is why so many attackers allow this to happen. It doesn't seem like it would be very much more trouble for an attacker to use an iframe or an unassuming picture to hide their persistent script behind. I could be wrong in assuming that broken images are very common with XSS. Thanks for the help!
Edit: I think XSS could be a misnomer in this case. I understand why an image tag that points to a java script file wouldn't display and be too much trouble to display. I think my question is more related to instances of files uploaded to the server with malicious code in them. I guess that's sort of a second question actually--is that actually XSS or more like an exploit of insecure object references by the server (going by OWASP terms)?
Edit: Here is a nice article describing XSS in detail. It mentions broken images, but it also discusses how to avoid them. I can't find any articles mentioning specific attacks with broken images. I do recall reading about a few phishing attacks through email, however (in those cases you are absolutely correct about CSRF, Daniel).
The websites that you have been reading may be referring to Cross-Site Request Forgery attacks (CSRF; CWE-352). CSRF attacks are commonly carried out with "broken images" because (1) browsers load images automatically (so the browser automatically makes an HTTP request on behalf of the visitor) and (2) many websites allow users to add images to user-contributed content.
Imagine that a website allowed users to post comments on a blog, and the blog software allowed users to add images to their comments by specifying the URL of an image. There are likely various admin functions of the blog software that are invoked by requesting certain URLs. For example, a comment might be deleted by anyone who is logged in as an administrator if the admin "visited" /comments/delete/# (where "#" is an ID of the particular comment to be deleted). A malicious non-admin will not be able to delete a comment, say comment 7754, by visiting /comments/delete/7754 because he or she is not authenticated. However, the malicious user might try adding a new comment with the content consisting only of the "image" at /comments/delete/7754. If an admin were to subsequently view the comment (simply view the page containing the malicious user's comment), then the browser would automatically request the "image" at /comments/delete/7754. This could cause comment 7754 to be deleted because the admin is logged in.
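Concretely, the "image" in the malicious comment is nothing more than:

```html
<!-- The admin's browser fetches this URL automatically when the page
     renders, sending the admin's session cookie along with the request -->
<img src="/comments/delete/7754" alt="" />
```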
This example of deleting comments gives you an idea of how some CSRF attacks work, but note that the effects can be a lot more sinister. The CWE page that I linked to references actual CSRF issues with various software that allowed things like privilege escalation, site settings manipulation, and creation of new users. Also, simply requiring POST for all admin functions does not make a website immune to CSRF attacks, because an XSS attack could dynamically append a specially-constructed form element to the document and programmatically submit it.
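To make that last point concrete, an injected payload along these lines (the endpoint and field name are invented) would defeat a POST-only defence:

```html
<script>
  // Hypothetical injected payload: builds a POST form to an admin
  // endpoint and submits it with the victim's ambient session cookie.
  var form = document.createElement("form");
  form.method = "POST";
  form.action = "/comments/delete";   // invented endpoint
  var field = document.createElement("input");
  field.type = "hidden";
  field.name = "id";
  field.value = "7754";
  form.appendChild(field);
  document.body.appendChild(form);
  form.submit();
</script>
```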
My UI prototype requires me to show the site's login info at all times: either the usual username and password textboxes, or "you are logged in as". The last bit doesn't have to be secure, as it's only information for the user, nothing I will use server-side. But the first part should be sent securely to the server.
It seems that I would have to use https for all pages on the site then. I would like to only use ssl for the things that are required to be secure.
One way is to put the login form in https://../login.aspx and show it on my main page in an IFrame.
One disadvantage I can see is that the user won't know that https is being used, unless they read the IFrame src in the source code.
What do you think?
Are you using the built-in asp.net login controls or do you just use two textbox controls?
You could use your own form tag (not runat="server") with the action attribute set to "https://..." and just use two html input tags and a button to log on.
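For instance (with www.example.com standing in for your own domain), a plain form like this posts the credentials over SSL even though the hosting page is served over plain http:

```html
<!-- Note: no runat="server"; the action points straight at the SSL endpoint -->
<form method="post" action="https://www.example.com/login.aspx">
  <input type="text" name="username" />
  <input type="password" name="password" />
  <input type="submit" value="Log in" />
</form>
```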
Again, this wouldn't show the user that their credentials are secure when logging in.
Because of some recently discovered SSL attacks, it is always preferable to also serve the logon form from an https:// page. Otherwise an attacker can intercept the http stream, change your form action from "https://..." to "http://...", and then sniff the credentials.
Another option would be to take advantage of the PostBackUrl property of the Button control.
You would need to create your own login LayoutTemplate to take advantage of this though. You would then be able to add the secure scheme to the current page URL, and set the PostBackUrl property of the submit button to that.
This would have a similar issues to your iFrame solution (the user wouldn't see the padlock symbols), however you would have the advantage that you wouldn't be using iFrames.
Another issue with using an iFrame is the effect it can have on the page:
It is a separate request, but it can block the JavaScript PageLoad event from firing.
The login form would only postback within the iFrame, so you'd need to refresh the parent page when the user is successfully logged in to remove it.
In addition, errors would be returned inside the iFrame, probably not leaving you much space to display the form as well, etc.
You've hit the major problems. You want the login, which needs to be on every page to use SSL, but you don't want the entire page to be SSL.
This is more of a business decision at this point than anything else. Would you rather your customers feel more secure about visiting your site, or do you want the login information present on every screen?
If you need to have both, you may need to also look at making your entire site SSL.