Check if URL targets Action (vs. file/etc) - asp.net

I'm trying to find the best way to tell whether a URL (as seen in Global.asax) is for an action. I want to exclude EVERYTHING else — i.e., a request for a bundle should fail the test, as should a request for a file.
It seems clunky and dirty to check that the request isn't for a file/directory/bundle/etc. I'd rather JUST check whether it's an action, but I'm having trouble coming up with what that test would look like.
Just FYI, in case it's relevant: I'm working on internationalization of a site, and I need to filter the Request objects so that I only fiddle with the one for the initial request.


User Identity Info

I've been messing around with creating my own AspNet.Security.OAuthProviders implementation by copying the GitHub example. I have a few questions.
First, I successfully authenticate, but when I get back, User.Identity.Name is empty. I don't see that information coming back from my provider. A noob question, I imagine, but do I have to explicitly request the information I want back? If so, how do I know what to ask for? I'm kind of working blindly.
Second, in the GitHub example of the Handler, CreateTicketAsync immediately makes a call to the UserInformationEndpoint. In my use case, after getting authorized I want to go to a page that has some links to some api requests that will use the acquired authorization, rather than do it right away. I'm not sure if there is an example for that or I'm making incorrect assumptions and going about this the wrong way.
This is entirely supposed to be for demo purposes as a "how to" for other developers so I want to make sure I do things the correct way.

Determine if requester is an Ajax call and/or is expecting JSON (or another content type)

I have solved a problem with a solution I found here on SO, but I am curious about if another idea I had is as bad as I think it might be.
I am debugging a custom security Attribute we have on/in several of our controllers. The Attribute currently redirects unauthorized users using a RedirectResult. This works fine except when the methods are called via Ajax. In those cases, the error returned to our JS consists of the full HTML of our error page (the one we redirect to), delivered with an HTTP status of 200/OK. I have solved this issue using the "IsAjaxRequest" method described in the answer to this question, so I am now perfectly able to respond differently to Ajax calls.
Out of curiosity, however, I would like to know what pitfalls might exist if I were to instead have solved the issue by doing the following. To me it seems like a bad idea, but I can't quite figure out why...
The ActionExecutingContext ("filterContext") has an HttpContext, which has a Request, which in turn has an AcceptTypes string collection. I notice that on my Ajax calls, which expect JSON, the value of filterContext.HttpContext.Request.AcceptTypes[0] is "application/json." I am wondering what might go wrong if I were to check this string against one or more expected content types and respond to them accordingly. Would this work, or is it asking for disaster?
I would say it works perfectly, and I have been using that approach for years.
The whole point of request headers is to let the client tell the server what it accepts and expects.
I suggest you read more here about Web API and how it uses exactly that technique.
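The Accept-header check described above is framework-neutral; here is a minimal sketch in Java (the question itself is about ASP.NET MVC, where you would read the same header via `filterContext.HttpContext.Request.AcceptTypes`). The method name is illustrative, and a production version would also honor q-values and wildcards such as `*/*`:

```java
import java.util.Arrays;

public class AcceptHeaderCheck {
    // Returns true if the Accept header explicitly lists JSON.
    // This sketch ignores q-values and wildcard types like */*,
    // which a full content-negotiation implementation must handle.
    static boolean acceptsJson(String acceptHeader) {
        if (acceptHeader == null) return false;
        return Arrays.stream(acceptHeader.split(","))
                .map(part -> part.split(";")[0].trim().toLowerCase())
                .anyMatch(type -> type.equals("application/json"));
    }

    public static void main(String[] args) {
        System.out.println(acceptsJson("application/json, text/javascript;q=0.9")); // true
        System.out.println(acceptsJson("text/html,application/xhtml+xml"));         // false
    }
}
```

The main pitfall this hints at: clients that would happily consume JSON often send `*/*` rather than `application/json`, so an exact-match check can misclassify them.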

How to detect and possibly drop/sanitize http request parameters/headers to prevent XSS attacks

Recently, we found that some of our SpringMVC-based site's pages that accept query parameters are susceptible to XSS attacks. For example, a URL like http://www.our-site.com/page?s='-(console.log(document.cookie))-'&a=1&fx=326tTDE could result in the injected JS being executed in the context of the rendered page. These pages are all GET-based; no POST requests are supported.
These parameters are written in the markup in numerous places, so doing an HTML encode (in all these places) would be tedious and require more code changes. In some cases, they are also written to cookies.
Ideally, we would want to detect them as early as possible, say inside a Servlet Filter/Spring Interceptor, and then, for each request parameter, decide whether to drop it altogether or sanitize it in some way before it's available to the rest of the application. We would want this decision to be configurable as well, so that the handling of a particular request parameter can be changed over time without significant code change.
Now, since these are request parameters that we want to potentially modify, we would probably have to use an approach similar to the one described here, if we go the Filter way. We would potentially want to sanitize HTTP Request Headers similarly too.
So, what would be the most flexible/minimum-overhead way to handle this situation? Would ESAPI be able to both detect and sanitize them in a configurable way? It's not clear from its API what is possible. We would definitely not want to hand-roll regexes for this. Also, would a Filter be the right place to handle this?
Thanks.
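To make the filter idea concrete, here is a minimal sketch of just the escaping step. The assumption (not from the question) is that you would wrap the request in an `HttpServletRequestWrapper` whose `getParameter()`/`getHeader()` overrides run values through something like this before the application sees them; for production, a vetted encoder such as the OWASP Java Encoder or ESAPI is preferable to hand-rolled escaping:

```java
public class ParamSanitizer {
    // HTML-escapes the characters that enable markup/script injection.
    // In a real Servlet Filter this would be called from an
    // HttpServletRequestWrapper override, not used standalone;
    // the class and method names here are illustrative only.
    static String sanitize(String value) {
        if (value == null) return null;
        StringBuilder sb = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The payload from the question, neutralized:
        System.out.println(sanitize("'-(console.log(document.cookie))-'"));
        // prints: &#x27;-(console.log(document.cookie))-&#x27;
    }
}
```

Note that blanket HTML-escaping at the filter is only correct when every parameter ends up in an HTML context; values written into cookies, JS, or URLs need context-specific encoding, which is one argument for a configurable, per-parameter policy as described above.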

how to handle download request from a WebView using WebResourceRequestFilter blackberry Cascades

I want to handle any download request coming from a WebView. How is it possible? The documentation at https://developer.blackberry.com/native/reference/cascades/bb__cascades__webresourcerequestfilter.html and https://developer.blackberry.com/native/reference/cascades/bb__cascades__webdownloadrequest.html describes the parameters, but I couldn't figure out how to do it.
Your question is not clear on what you don't understand. Remember this is not a training forum, the idea is that you should try things, review the documentation and then ask specific questions to get the best out of a forum.
Moreover it is not clear whether you are trying to handle the download request at the Server, or capture the request before the download attempt leaves the BB.
I'm going to assume you want to display a web page on the BlackBerry but make sure that any resource requests that the page generates, are filtered by your program, so that you can supply the data (assuming you have it).
I implemented something like this a while ago and remember that it was not simple to figure out what was going on, but I played with it a bit and it all made sense.
I don't remember using WebDownloadRequest and can't really see how it helps in this case.
The key is WebResourceRequestFilter. You create your own WebResourceRequestFilter, making sure you implement the required methods. Then you use WebPage::setNetworkResourceRequestFilter(WebResourceRequestFilter*) to make sure the web page will ask your WebResourceRequestFilter for its resources. The first method the web page invokes is filterResourceRequest(), and the return value from this invocation determines which other methods of your WebResourceRequestFilter the WebPage will invoke.
I suggest you implement a WebResourceRequestFilter, put some debugging in filterResourceRequest(), but always return the FilterAction Accept, which means the web page will use its normal processing to obtain the resources. Then try various other FilterAction return values and see what happens...

What is the difference between GET and POST in the context of creating an AJAX request?

I have an AJAX request that sends a GET: 'getPendingList'. This request should return a JSON string representing a list of pending requests that need to be approved. I'm a little confused about whether I should be using GET or POST here.
From this website:
GET requests can be cached
GET requests can remain in the browser history
GET requests can be bookmarked
GET requests can be distributed & shared
GET requests can be hacked (ask Jakob!)
So I'm thinking: I don't want the results of this GET to be cached because the pending list could change. On the other hand, using POST doesn't seem to make much sense either.
How should I think about GET and POST? I've been told that GET is the same as a 'read'; it doesn't (or shouldn't) change anything on the server side. This makes sense. What doesn't make sense is the caching part; it wouldn't work for me if someone else cached my GET request because I'm expecting the data to change.
Yahoo's best practices might be worth reading over. They recommend using GET primarily for retrieving information and using POST for updating information. In a separate item, they also recommend that you make AJAX requests cacheable where it makes sense. Check it out, it's a good read.
In short, GET requests should be idempotent; POST requests need not be.
If you are altering state, use POST - otherwise use GET.
And don't forget, when talking about caching with GET/POST, that is browser-caching.
Nothing stopping you from caching the data server-side.
Also, in general - JSON calls should be POST (here's why)
So, after some IRC'ing, it looks like the best way to do this is to use GET (in this particular instance), but to prevent caching. There are two ways to do this:
1) Append a random string to your GET request.
This seems like a hacky way to do this but it sounds like it might be the only solution for IE: Prevent browser caching of jQuery AJAX call result.
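The technique itself is language-agnostic: make each request URL unique so the browser has nothing cached to match it against. A sketch in Java for concreteness (jQuery's `cache: false` option does the same thing by appending a `_` timestamp parameter):

```java
public class CacheBuster {
    // Appends a throwaway query parameter so every request URL is
    // distinct, defeating the browser cache. The parameter name "_"
    // mirrors what jQuery's { cache: false } option appends.
    static String bust(String url) {
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "_=" + System.currentTimeMillis();
    }

    public static void main(String[] args) {
        System.out.println(bust("/getPendingList"));
        System.out.println(bust("/getPendingList?team=4"));
    }
}
```

The server simply ignores the extra parameter; the cost is that no response is ever reusable from cache, even when reuse would have been harmless.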
2) In your response from the server, set the headers to no-cache.
It's not clear what the definitive behavior is on this. Some folks (see the previous link) claim that IE doesn't respect the no-cache directives. Other folks seem to think that this works: Internet Explorer 7 Ajax links only load once.
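For the header-based approach, the conventional trio looks like this — shown here as a plain Java map for illustration, on the assumption of a servlet-style API where you would call `response.setHeader(name, value)` for each entry (the same header names apply to any server stack):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NoCacheHeaders {
    // The standard set of response headers for disabling caching.
    // Cache-Control covers HTTP/1.1 clients; Pragma and Expires are
    // belt-and-braces for older HTTP/1.0 clients and old IE versions.
    static Map<String, String> noCacheHeaders() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Cache-Control", "no-cache, no-store, must-revalidate");
        h.put("Pragma", "no-cache");
        h.put("Expires", "0");
        return h;
    }

    public static void main(String[] args) {
        // In a servlet: noCacheHeaders().forEach(response::setHeader);
        noCacheHeaders().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

If some IE versions really do ignore these directives, combining this with the URL cache-buster from option 1 covers both cases at the cost of some redundancy.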

Resources