Many websites discuss broken images as good warning signs of a possible XSS attack in a page's source code. My question is why so many attackers allow this to happen. It doesn't seem like it would be much more trouble for an attacker to use an iframe or an unassuming picture to hide their persistent script behind. I could be wrong in assuming that broken images are very common with XSS. Thanks for the help!
Edit: I think XSS could be a misnomer in this case. I understand why an image tag that points to a JavaScript file wouldn't display, and why it would be too much trouble to make it display. I think my question is more related to instances of files uploaded to the server with malicious code in them. I guess that's actually a second question: is that really XSS, or more like an exploit of insecure direct object references by the server (going by OWASP terms)?
Edit: Here is a nice article describing XSS in detail. It mentions broken images, but it also discusses how to avoid them. I can't find any articles mentioning specific attacks with broken images. I do recall reading about a few phishing attacks through email, however (in those cases you are absolutely correct about CSRF, Daniel).
The websites that you have been reading may be referring to Cross-Site Request Forgery attacks (CSRF; CWE-352). CSRF attacks are commonly carried out with "broken images" because (1) browsers load images automatically (so the browser automatically makes an HTTP request on behalf of the visitor) and (2) many websites allow users to add images to user-contributed content.
Imagine that a website allowed users to post comments on a blog, and the blog software allowed users to add images to their comments by specifying the URL of an image. There are likely various admin functions of the blog software that are invoked by requesting certain URLs. For example, a comment might be deleted by anyone who is logged in as an administrator if the admin "visited" /comments/delete/# (where "#" is an ID of the particular comment to be deleted). A malicious non-admin will not be able to delete a comment, say comment 7754, by visiting /comments/delete/7754 because he or she is not authenticated. However, the malicious user might try adding a new comment with the content consisting only of the "image" at /comments/delete/7754. If an admin were to subsequently view the comment (simply view the page containing the malicious user's comment), then the browser would automatically request the "image" at /comments/delete/7754. This could cause comment 7754 to be deleted because the admin is logged in.
This example of deleting comments gives you an idea of how some CSRF attacks work, but note that the effects can be a lot more sinister. The CWE page that I linked to references actual CSRF issues in various software that allowed things like privilege escalation, site settings manipulation, and creation of new users. Also, simply requiring POST for all admin functions does not make a website immune to CSRF attacks, because an XSS attack could dynamically append a specially constructed form element to the document and programmatically submit it.
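The usual server-side defense against this kind of forged request is a per-session anti-forgery token that the "broken image" URL cannot know. Here is a minimal, framework-agnostic sketch in Python; the function names and the dict-based session are illustrative, not from any particular library:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Store a random token in the server-side session and return it,
    so it can be embedded in a hidden form field on the real page."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def is_valid_csrf_token(session, submitted_token):
    """Accept a state-changing request only if the submitted token matches
    the one in the session (constant-time comparison)."""
    expected = session.get("csrf_token")
    return expected is not None and hmac.compare_digest(expected, submitted_token)

# A forged GET triggered by an <img> tag cannot supply the token, so it fails.
session = {}
token = issue_csrf_token(session)
print(is_valid_csrf_token(session, token))      # → True (legitimate form post)
print(is_valid_csrf_token(session, "guessed"))  # → False (forged request)
```

Real frameworks (ASP.NET's anti-forgery tokens, Django's CSRF middleware, etc.) implement this same synchronizer-token idea for you.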
First of all, I'd like to preface this post by stating that I know this is a terrible user experience...
I have a client who would like to prevent site visitors from sharing login credentials.
Because this is a corporate marketing site, social login is not an option.
The client claims that there is a site where upon registration, a cookie is dropped onto the user's device and the user is also given a unique password that will only work on that specific device.
Does anyone know how to make this work using Wordpress? (I'd like to avoid using third party plugins)
This sounds like Single Sign-On (SSO) or two-factor authentication (2FA) will be needed. The SSO Wikipedia page references a cookie-based solution for TCP/IP networks https://en.wikipedia.org/wiki/Single_sign-on so perhaps that's how this came up with your client.
Once you identify what your options are, based on what your client is using for authentication, setup may be a bit easier. I think a plugin would save you a lot of time, since this is a pretty elaborate task. This one may do the trick: https://wordpress.org/plugins/miniorange-saml-20-single-sign-on/
Regardless, it's pretty challenging to prevent the sharing of credentials. SSO may be a deterrent if it gives access to something else that the user doesn't want to share. 2FA doesn't prevent a user from sharing the PIN that's generated, either. Perhaps the only real way is to require an IP match on a device with biometric authentication.
I am after some advice regarding the use of GUIDs from a security perspective. I have developed an ASP.Net application. It provides the user with access to some material, such as documents and photos, that isn't on the web server. These are stored on a file server. I have a 'GetResource.aspx' page which takes the ID of the resource, opens it using System.IO.FileInfo, writes it to the response stream, and returns it.
So, GetResource.aspx?id=123 would return, say, a picture that the user has access to. Of course, the user could manually enter the URL as GetResource.aspx?id=456 in which case the picture / document etc with that ID would be returned and it may not be one they have permission to access.
So clearly using an integer ID is not adequate. Would using a GUID as the ID provide enough 'randomness' that I could reliably assume the user could never manually enter "GetResource.aspx?guid={A guessed guid}" and ever expect to access a valid resource, including if using a script that made many random guesses per second?
Or, is there no substitute for determining the ID of the user from a session variable, determining that he does actually have access to the requested resource, and only then returning it (which, as I write this, I'm more and more convinced is the case!)?
Thanks
There is certainly no substitute for authenticating the user and seeing if they are authorized to access the resource. What you are proposing here is a method of making it harder for a user to hit on a valid ID for a document they are not authorized to view (either by mistake or on purpose).
A GUID is certainly large enough that you would never get "accidental" valid IDs in practice. That makes a GUID without authorization checks a system that works great as long as no one is actively trying to break it. On the other hand, authorization checking is a system that would work great even in the presence of active attackers (of course this depends on what the attackers can manage to do).
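To put numbers on "large enough": a version-4 GUID carries 122 random bits (6 of its 128 bits are fixed by the format). A quick back-of-the-envelope sketch in Python (the guess rate is an assumption for illustration):

```python
import secrets
import uuid

# A version-4 UUID has 122 random bits; 6 bits are fixed by the format.
random_bits = 122
guesses_per_second = 1_000_000  # illustrative attacker speed

# Expected years to cover half the ID space at that rate.
years_for_half = (2 ** random_bits / 2) / guesses_per_second / (60 * 60 * 24 * 365)
print(f"{years_for_half:.2e} years to cover half the space")

# If you do rely on unguessable IDs, generate them from a CSPRNG rather than
# assuming every GUID generator in every framework is cryptographically random:
resource_id = secrets.token_urlsafe(16)           # 128 random bits, URL-safe
print(uuid.UUID(bytes=secrets.token_bytes(16), version=4))  # random UUID
```

The result is on the order of 10^22 years, so brute-force guessing is not the realistic threat; leaked or shared URLs are, which is why the authorization check still matters.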
You should choose between the two approaches depending on the nature of your application (is it public? are the users known and accountable for their actions? how bad would a "security breach" be?).
If it is protected content, you should determine whether the user is authorised before blindly serving it.
The GUID does help to some extent: it makes guessing URLs harder, so I'd still recommend using one. But URLs can still be shared (even accidentally). If you are just going to serve up the content regardless of who makes the request, then it offers little real protection.
If the content is restricted and contains personal data, then you should go with username-and-password authentication.
As part of a webapp I'm building, there is an iframe that allows the currently logged in user to edit some content that will only be displayed in their own logged-in profile, or on a public page with no logged in users.
As that means the content will only be viewable by the user who entered it, or by a user on a public site, does this mean the risk of XSS is moot? If they can only inject JavaScript into their own page, then they can only access their own cookies, yeah? And if we then display that content on a public page that has no concept of a logged-in user (on a different subdomain), then there are no cookies to access, correct?
Or is my simplistic view of the dangers of XSS incorrect?
Anthony
Stealing session cookies is actually not the only harm JavaScript injection can bring to other users. Redirects, form submissions, annoying alerts, and countless other bad things can happen. You should never trust HTML content provided by a user, nor display it to others unsanitized.
To avoid HTML injection while still allowing users to provide HTML, the general idea is to define a set of HTML tags that can bring no harm to other users: for example, text formatting and paragraph tags, but not unchecked images or scripts. You parse the provided HTML and delete all but those tags.
You can use HtmlAgilityPack or any other library that helps you parse HTML provided by a user. Then you can filter out and delete any unwanted markup and leave only safe tags.
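HtmlAgilityPack is a .NET library; to illustrate the same allowlist idea in a self-contained way, here is a minimal sketch using only Python's standard library. The tag list is illustrative, and for production you should prefer a vetted sanitizer library rather than rolling your own:

```python
from html.parser import HTMLParser
from html import escape

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}  # illustrative allowlist

class AllowlistSanitizer(HTMLParser):
    """Keep only allowlisted tags (with all attributes stripped);
    everything else is dropped or escaped to plain text."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append(f"<{tag}>")  # attributes (onclick, src, ...) dropped

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))  # text content is HTML-escaped

def sanitize(html_input):
    parser = AllowlistSanitizer()
    parser.feed(html_input)
    parser.close()
    return "".join(parser.out)

print(sanitize('<p onclick="evil()">hi <script>alert(1)</script></p>'))
# → <p>hi alert(1)</p>
```

Note that the `<script>` element itself is gone and its body survives only as harmless escaped text, and the `onclick` attribute is stripped from the allowed `<p>` tag.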
Often an attacker will use multiple vulnerabilities when attacking a site. There are a couple of problems with allowing a user to XSS him/herself.
CSRF - A user can visit a malicious site which posts malicious data to his profile and is thus XSSed.
Clickjacking with content - See http://blog.kotowicz.net/2011/07/cross-domain-content-extraction-with.html
Next, if that content is displayed on the public page, it could redirect users to different sites containing exploits that automatically take over the user's computer, or it could redirect to porn.
I'm thinking of creating a diagnostics page for an ASP.NET app, which would be mostly intended for admin use to get more information about the application for diagnosing problems.
Examples of the info the page might have :
System.Environment.MachineName (might be useful in web farm scenarios)
System.Environment.Version
Environment.UserName
database name
current user's session ID
Some of the info on this page might be sensitive from a security perspective.
If you've done this sort of page before, what sort of security did you put on access to it?
EDIT :
I should add - occasionally it might be useful to see this page whilst logged in as a specific (i.e. real) end user. e.g. say a problem can only be reproduced when logged in as a particular user. Being able to see the diagnostics page for that user might be useful. e.g. knowing the current session ID might be helpful for debugging.
EDIT 2 :
I'm starting to think that this diagnostics page should in fact be two different pages. One to display stuff which is the same for all users (e.g. database name, CLR version), and another for stuff which can vary by session (e.g. browser info, session ID).
Then you could lock down security more for the first page.
Yes, I've added this sort of page before (and found it useful). The security was pretty simple: the page contained a password form. The server-side code checked this password against a configured value and, if correct, displayed the real content and set a value in the user's session to say that they've been authenticated as a developer, so that they're not prompted again next time.
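Translating that idea into a language-neutral sketch (shown here in Python; an ASP.NET version would use `Session` and a value from Web.config, and the hard-coded password below is purely illustrative):

```python
import hmac

# In real code this would come from configuration, never hard-coded.
DIAG_PASSWORD = "change-me"

def check_diag_access(session, submitted_password=None):
    """Grant access if the session is already flagged as a developer, or if
    the submitted password matches (constant-time compare); on a successful
    match, flag the session so the user is not prompted again."""
    if session.get("is_developer"):
        return True
    if submitted_password is not None and hmac.compare_digest(
        submitted_password, DIAG_PASSWORD
    ):
        session["is_developer"] = True
        return True
    return False

session = {}
print(check_diag_access(session, "wrong"))      # → False
print(check_diag_access(session, "change-me"))  # → True
print(check_diag_access(session))               # → True (flag is remembered)
```

The constant-time comparison avoids leaking the password length or prefix through response timing, which is cheap insurance even on an internal page.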
I suppose there was also a little security by obscurity, since the URL of the page wasn't published anywhere.
I was also careful not to reveal anything really sensitive on the page. For example, it allowed viewing our application config values, but masked out anything with "password" in it - hey, if we really want to see the password we can open a remote desktop session to the server.
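That masking step is simple to sketch; key names and the mask string here are illustrative:

```python
def mask_sensitive(config):
    """Replace values whose key looks sensitive before displaying a config
    dump on the diagnostics page."""
    return {
        key: "********" if "password" in key.lower() else value
        for key, value in config.items()
    }

print(mask_sensitive({"DbPassword": "s3cret", "AppName": "Demo"}))
# → {'DbPassword': '********', 'AppName': 'Demo'}
```

In practice you might extend the keyword list (e.g. "secret", "key", "connectionstring") depending on how your configuration is named.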
There's also a couple of other ways you could do this:
If your web application has user authentication, restrict access to this page by checking that the user is flagged as an administrator or belongs to some kind of admin role.
Use a simple if (Request.IsLocal) ... type check, though the downside of this is that you still have to connect to the server and browse the website locally - which might not always be possible. However, this does still have the benefit of being able to easily view key system settings.
Personally, I've used a combination of both methods, where a local request always allows access and non-local requests require an admin user, e.g. if (!Request.IsLocal && !IsAdminUser()) throw new SecurityException().
Also, I'm in agreement with Evgeny - be careful not to reveal anything really sensitive on this page (such as application connection strings or passwords).
Use forms authentication and set up a user or two with access to that page. That way you can change passwords and revoke access once the site is deployed.
It sounds like you want a robust solution for your diagnostics page. I would take a look at open-source projects like Elmah (http://code.google.com/p/elmah/) for a good example of a robust error page that includes configurable security. To give you an idea, here is a post on configuring Elmah which takes you through setting up the security. The security I have tested allows me to use my domain credentials to log in.
I have a WordPress site. Like with many WordPress sites I see people (probably robots) trying their luck at the login page every once in a while. However, for the past 2 weeks it’s been non-stop at a rate of 400-500 tries a day…
So I went ahead and took the following security measures:
Changed the login URL to something different than the regular /wp-admin.
Limit the number of login attempts per URL, and automatically block any IP trying to log in with an invalid username such as “test” or “admin”.
Set up two-factor authentication to make sure they would not manage to get in even if they guessed the username and password.
However that didn’t seem to do much and I’m still seeing a huge number of login attempts, so next thing I did was:
Password protect the login URL itself.
And still I’m seeing the same number of login attempts… now my questions are basically 2:
How are they managing to still try their luck at the login form even if that page is password protected?
Is there anything else I can do about it?
Cloudflare offers a free entry-level plan that may help reduce some of this traffic before it gets to your site. Also, their $20/month plan (as of Aug 2017) can be paired with their WordPress plugin to use their built-in WordPress rulesets. Cloudflare also has a few more settings that let you put extra filters and roadblocks in front of specific types of traffic.
If you do choose to use CloudFlare with WordPress, be sure you understand exactly how/if you are choosing to push content into the CloudFlare CDN (content delivery network) and how that relates to the content cache on your site.
Standard disclaimer: I have no relationship with Cloudflare except as a customer.
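Beyond a CDN-level filter, the per-IP attempt limiting the question describes can also be enforced server-side. A minimal sketch (thresholds and the in-memory store are illustrative; a real WordPress plugin would persist this in the database or a cache):

```python
import time
from collections import defaultdict, deque

MAX_ATTEMPTS = 5       # illustrative: block after 5 failures...
WINDOW_SECONDS = 600   # ...within a 10-minute sliding window

_attempts = defaultdict(deque)  # ip -> timestamps of recent failed logins

def record_failed_login(ip, now=None):
    """Record a failed attempt and drop entries older than the window."""
    now = time.time() if now is None else now
    q = _attempts[ip]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()

def is_blocked(ip, now=None):
    """True once an IP has exceeded the attempt budget inside the window."""
    now = time.time() if now is None else now
    q = _attempts[ip]
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    return len(q) >= MAX_ATTEMPTS

for _ in range(5):
    record_failed_login("203.0.113.9", now=1000.0)
print(is_blocked("203.0.113.9", now=1000.0))   # → True
print(is_blocked("198.51.100.7", now=1000.0))  # → False
```

Note that distributed botnets rotate IPs, which is exactly why this alone did not stop the traffic and an upstream filter like Cloudflare still helps.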