I feel like I'm missing something basic, but I've googled and explored all around the interface and I can't find a way to remove multiple URL parameters at once without using "Undo" or removing the request in its entirety. Basically I have requests with around 50 or more URL parameters, and since Paw doesn't show these in the URL field, the only way I can see to remove them is by clicking the little minus symbol next to each one, which is quite tedious when you have lots. How do I remove multiple ones at once?
I'm late to the party, but this has been irritating me for a while now as well. I'd love to see a batch delete action added for URL params, but for now I create a version of the request with no params, then duplicate it when I need to start with a clean request. It definitely can clutter up the request list, but the copies could be thrown into a subfolder or some such and duplicated there if a lot of them might be needed. Either way, it's faster than always having to remove params manually; in my case it's just 15 or so, but that's still an aggravation.
A few weeks back I was faced with a challenge that was basically to use web scraping to get all the files of a GitHub repository, group them by extension, and sum the sizes of the files with each particular extension. The important constraint was that we SHOULD NOT use GitHub's API nor any web scraping tool.
My solution was to get the main HTML page as a string and apply a regex to extract all URLs containing <repo_owner>/<repo_name>/blob or <repo_owner>/<repo_name>/tree. For the blob URLs, we could make another request and apply another regex to extract the file size and line count; for URLs of the other type, we'd make another request to extract more blob URLs. I did this until no URLs of the latter type remained.
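For illustration, here's a rough sketch of that crawl in C#. The regexes and the one-second delay are assumptions standing in for whatever the original code used, the <repo_owner>/<repo_name> placeholders are from the description above, and GitHub's markup may well have changed since:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class RepoCrawler
{
    static readonly HttpClient Client = new HttpClient();

    static async Task Main()
    {
        // Placeholder root URL, as in the post.
        string root = "https://github.com/<repo_owner>/<repo_name>";

        var pending = new Queue<string>();
        var visited = new HashSet<string>();
        pending.Enqueue(root);
        visited.Add(root);

        var blobUrls = new HashSet<string>();

        while (pending.Count > 0)
        {
            string html = await Client.GetStringAsync(pending.Dequeue());

            // Collect file pages (blob)...
            foreach (Match m in Regex.Matches(html, @"href=""(/[^""]+/blob/[^""]+)"""))
                blobUrls.Add("https://github.com" + m.Groups[1].Value);

            // ...and recurse into directory pages (tree) we haven't seen yet.
            foreach (Match m in Regex.Matches(html, @"href=""(/[^""]+/tree/[^""]+)"""))
            {
                string url = "https://github.com" + m.Groups[1].Value;
                if (visited.Add(url)) pending.Enqueue(url);
            }

            await Task.Delay(1000); // the delay mentioned below, to avoid being blocked
        }

        // Each blob page would then be fetched and a second regex applied to pull out
        // the file size and line count, grouping totals by extension (omitted here).
        Console.WriteLine("Found " + blobUrls.Count + " file URLs");
    }
}
```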
It solved the problem, but it was a pretty bad solution, because we had to make far too many requests to GitHub and were always blocked at some point while analyzing a repository. I applied a delay between requests, but then it takes a "LOOOT" of time to process one repository; and even if we made, say, 10 requests simultaneously, we'd still run into the too-many-requests problem.
This bothers me to this day, because I couldn't find a better solution. As the challenge is no longer active, I'd like to hear other people's ideas of how it could be solved!
I'm trying to find the best way to determine whether a URL (as seen in global.asax) is for an action. I want to exclude EVERYTHING else, i.e. a request for a bundle would fail, as would a request for a file.
It seems clunky and dirty to check that the request isn't for a file/directory/bundle/etc. I'm hoping to instead JUST check whether it's an action, but I'm having trouble coming up with what that test would look like.
Just FYI, in case it's relevant: I'm working on internationalizing a site and need to filter the Request objects so that I only fiddle with the one for the initial request.
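For reference, a sketch of what one such test could look like. This is an assumption about a workable approach, not a verified solution, and it still excludes bundles and physical files explicitly, since a default {controller}/{action} route would otherwise happily match a bundle URL too:

```csharp
using System.Web;
using System.Web.Optimization;
using System.Web.Routing;

public static class ActionUrlCheck
{
    // Returns true only if the request resolves to an MVC action:
    // a route must match, it must not be an ignored route, and the
    // route values must carry controller/action entries.
    public static bool LooksLikeMvcAction(HttpContextBase context)
    {
        string path = context.Request.AppRelativeCurrentExecutionFilePath; // e.g. "~/home/index"

        if (BundleTable.Bundles.GetBundleFor(path) != null)
            return false; // a registered bundle, not an action
        if (System.IO.File.Exists(context.Server.MapPath(path)))
            return false; // a physical file on disk

        RouteData routeData = RouteTable.Routes.GetRouteData(context);
        return routeData != null
            && !(routeData.RouteHandler is StopRoutingHandler)   // skip IgnoreRoute() entries
            && routeData.Values.ContainsKey("controller")
            && routeData.Values.ContainsKey("action");
    }
}

// e.g. in global.asax:
//   bool isAction = ActionUrlCheck.LooksLikeMvcAction(new HttpContextWrapper(HttpContext.Current));
```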
I'm designing a REST API that supports HTTP GET parameters. In most cases I only accept one value for a parameter. But how should I handle duplicate parameters?
For example, Stack Overflow accepts a GET param tab:
http://stackoverflow.com/?tab=hot
http://stackoverflow.com/?tab=featured
Duplicate parameters are allowed, however, so passing both values is also well-formed:
http://stackoverflow.com/?tab=hot&tab=featured
What should I do? Just go with the first value, silently ignoring the others (which is what SO does), or return an error stating that only one value is allowed? In the latter case, which error and status code should I return (409 Conflict, perhaps)?
I agree with VKSingla that this is a design decision; there is no correct answer here, only opinions.
If you ask me, I would make a 'strict' API and just throw an error (making sure it is a clear error, not just a random code which doesn't help the user). I prefer this strict approach because if user code is adding the same param twice, it is likely a bug somewhere in that code. Revealing this bug as early as possible helps the user find it ASAP.
If you choose to ignore the other parameters instead, make sure the user knows about this behavior; for example, document that 'all duplicate parameters after the first will be ignored'. Undocumented 'magic' behavior like this can make code pretty damn hard to debug.
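A minimal sketch of the strict approach in ASP.NET (the 'tab' example above would be caught by this; I've used 400 Bad Request here, though as the question notes, the exact status code is a judgment call):

```csharp
using System.Net;
using System.Web;

public static class QueryStringGuard
{
    // Fails fast with a clear message rather than silently picking one of the values.
    public static void RejectDuplicateParams(HttpRequest request, HttpResponse response)
    {
        foreach (string key in request.QueryString.AllKeys)
        {
            string[] values = request.QueryString.GetValues(key);
            if (values != null && values.Length > 1)
            {
                response.StatusCode = (int)HttpStatusCode.BadRequest; // 400
                response.Write("Parameter '" + key + "' was supplied " + values.Length +
                               " times; only one value is allowed.");
                response.End();
            }
        }
    }
}
```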
This is a design decision. It's your API; you decide how you want it to function.
If you choose to ignore one of the values, then the question is: which one? Picking either is arbitrary, so it is simply a conflict. Alternatively,
your API can respond with combined data, but then the request should look like this:
https://stackoverflow.com/?tab=hot,featured
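A minimal sketch of accepting such a combined value in ASP.NET (the 'tab' name is from the example; splitting on ',' is the assumed convention):

```csharp
using System.Web;

public static class TabParser
{
    // Splits ?tab=hot,featured into ["hot", "featured"].
    public static string[] ParseTabs(HttpRequest request)
    {
        string raw = request.QueryString["tab"]; // e.g. "hot,featured"
        return string.IsNullOrEmpty(raw)
            ? new string[0]
            : raw.Split(',');
    }
}
```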
Also see this related question: Extra Query parameters in the REST API Url.
Especially given that the majority of browsers don't support PUT and DELETE, is there anything aside from strict standards compliance that justifies the extra development time?
If you are developing your web application only for browsers, you should go with POST and GET.
But REST APIs, for example, should/could make use of the PUT and DELETE methods, so you can define more precisely what action you want to execute on specific resources. http://en.wikipedia.org/wiki/Representational_State_Transfer
There's a pretty interesting article on this very subject here: http://www.artima.com/lejava/articles/why_put_and_delete.html
A slight extract:
PUT and DELETE are in the middle between GET and POST. The difference between PUT or DELETE and POST is that PUT and DELETE are idempotent, whereas POST is not. PUT and DELETE can be repeated if necessary. Let's say you're trying to upload a new page to a site. Say you want to create a new page at http://www.example.com/foo.html, so you type your content and you PUT it at that URL. The server creates that page at that URL that you supply. Now, let's suppose for some reason your network connection goes down. You aren't sure, did the request get through or not? Maybe the network is slow. Maybe there was a proxy server problem. So it's perfectly OK to try it again, or again—as many times as you like. Because PUTTING the same document to the same URL ten times won't be any different than putting it once. The same is true for DELETE. You can DELETE something ten times, and that's the same as deleting it once.
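A small sketch of the retry logic the extract describes, using C#'s HttpClient (the URL and page content are from the quoted example; the three-attempt limit is arbitrary):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class PutRetryDemo
{
    static async Task Main()
    {
        var client = new HttpClient();

        for (int attempt = 1; attempt <= 3; attempt++)
        {
            try
            {
                // PUT is idempotent: sending the same document to the same URL
                // repeatedly leaves the server in the same state as sending it once.
                var content = new StringContent("<html><body>my new page</body></html>");
                var response = await client.PutAsync("http://www.example.com/foo.html", content);
                if (response.IsSuccessStatusCode) return;
            }
            catch (HttpRequestException)
            {
                // The network dropped and we can't tell whether the request got through.
                // Because PUT is idempotent, simply trying again is safe.
            }
        }
    }
}
```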
We have a long-running website where XSS lurks. The problem comes from some developers directly retrieving Request["sth"] - without using HtmlEncode()/HtmlDecode() - processing it, and putting it on the web.
I wonder if there is any mechanism, like an HttpModule, that would help us HtmlEncode() all the items in an HTTP request to avoid XSS to some extent.
I'd appreciate any suggestions.
Rgds,
Ricky
The problem is not retrieving Request data without HTML-encoding. In fact, that's perfectly correct. You should not encode any text until the final output stage, when you spit it into an HTML page.
Trying to blanket-encode incoming parameters, whether that's HTML-encoding or SQL-encoding, is totally the wrong thing. It may hide XSS holes in your app but it does not fix them. You will still have a hole if you output content that hasn't come from parameters, or has been processed since then. Meanwhile the automatic encoding will fill your database with multiply-escaped &amp;amp;amp;amp;amp; crud.
You need to fix the output stage; that's where the problem lies.
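A minimal sketch of what that looks like in ASP.NET (the "comment" parameter name is hypothetical):

```csharp
using System.Web;

public static class CommentPage
{
    // The raw value is stored and processed untouched...
    public static string GetRawComment(HttpRequest request)
    {
        return request["comment"]; // hypothetical parameter name
    }

    // ...and is HTML-encoded only at the final output stage.
    public static void RenderComment(HttpResponse response, string rawComment)
    {
        response.Write(HttpUtility.HtmlEncode(rawComment));
    }
}
```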
As bobince said, this is an output problem, not an input problem. If you can isolate where this data is output on the page, you could create a filter and add it to the Response object. The filter would isolate the areas that are commonly output and HtmlEncode them.
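A rough sketch of such a response filter (the <!--enc--> markers are hypothetical delimiters you'd emit around user data, and buffering the whole response is a simplification):

```csharp
using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;
using System.Web;

// Buffers the response, HtmlEncodes the content between marker comments,
// then writes the result to the original output stream.
public class HtmlEncodeRegionFilter : Stream
{
    private readonly Stream _inner;
    private readonly MemoryStream _buffer = new MemoryStream();
    private bool _written;

    public HtmlEncodeRegionFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _buffer.Write(buffer, offset, count); // collect the whole page first
    }

    public override void Flush()
    {
        if (_written) return; // Flush can be called more than once
        _written = true;

        string html = Encoding.UTF8.GetString(_buffer.ToArray());
        string processed = Regex.Replace(
            html,
            "<!--enc-->(.*?)<!--/enc-->",        // hypothetical markers around user data
            m => HttpUtility.HtmlEncode(m.Groups[1].Value),
            RegexOptions.Singleline);

        byte[] output = Encoding.UTF8.GetBytes(processed);
        _inner.Write(output, 0, output.Length);
        _inner.Flush();
    }

    // Minimal Stream plumbing for a write-only filter.
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { return 0; } }
    public override long Position { get; set; }
    public override int Read(byte[] b, int o, int c) { throw new NotSupportedException(); }
    public override long Seek(long o, SeekOrigin s) { throw new NotSupportedException(); }
    public override void SetLength(long v) { throw new NotSupportedException(); }
}

// Attach it, e.g. in an HttpModule's BeginRequest:
//   HttpContext.Current.Response.Filter =
//       new HtmlEncodeRegionFilter(HttpContext.Current.Response.Filter);
```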