Testing DELETE using spring-test-mvc - spring-mvc

I am using Spring MVC to create RESTful endpoints. I am using spring-test-mvc to test them at the unit/integration test level. I am now coming across this team's first attempt at implementing an endpoint using DELETE. This means the container needs to be set up to allow DELETE (PUT will come shortly after). My research took me here:
http://www.codereye.com/2010/12/configure-tomcat-to-accept-http-put.html
I am technically using JBoss, but I have a feeling a Tomcat write-up will do just fine. Anyway, my problem is not at the container level.
I am trying to create a unit test to verify the most basic 404 case. Let's say you try to delete a user by calling /users/{id}. My test passes an invalid id, and I expect a 404 in return. It gives a 405 instead. That makes sense if DELETE is not supported. Following the instructions in the link above, I should add some entries to the web.xml. I did so in both main and test. Both still gave me the 405.
How would I set up spring-test-mvc to pick up these new HTTP method types from the web.xml or some other location? My research hasn't turned up anything beyond the fact that DELETE isn't supported out of the box.
Thanks
Dustin

Spring-test-mvc does support DELETE (and PUT); I have used it with a DELETE-based method. It is true that you need to add the HiddenHttpMethodFilter in web.xml for the DELETE HTTP method to work within your deployed application, but spring-test-mvc does not look at that filter: it works from the DispatcherServlet down. Here is one of the samples that works for me:
mockMvc.perform(delete("/members/1").contentType(MediaType.APPLICATION_JSON))
       .andExpect(status().isOk());
The error you are seeing could, I suspect, be more related to the Content-Type or Accept headers; that is where I have seen a 405 returned. You may be able to change your log level to debug or trace and see what else shows up.
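For reference, the web.xml registration the answer refers to looks roughly like this (a sketch; the filter name and the dispatcherServlet servlet-name are assumptions about your setup):

```xml
<filter>
    <filter-name>hiddenHttpMethodFilter</filter-name>
    <filter-class>org.springframework.web.filter.HiddenHttpMethodFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>hiddenHttpMethodFilter</filter-name>
    <servlet-name>dispatcherServlet</servlet-name>
</filter-mapping>
```

Again, this only matters in the deployed container; in a spring-test-mvc test a delete("/users/{id}") request goes straight to the DispatcherServlet, so a handler that can't find the id can be verified with .andExpect(status().isNotFound()).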

Related

URL manipulation always returns a 200:OK in meteor - getting flagged as violation in OWASP-ZAP

I ran OWASP ZAP and the tool flagged a high-severity finding for a possible SQL injection issue. Although we know for sure that we do not use any SQL databases as part of our application stack, I poked around and have a few questions.
The payload that triggered this “vulnerability” was as below:
https://demo.meteor.app/sockjs/info?cb=n6_udji55a+AND+7843%3D8180--+UFVTsdsds
Running this on the browser, I get a response:
{"websocket":true,"origins":["*:*"],"cookie_needed":false,"entropy":3440653497}
I am able to make any sort of manipulation to what comes after the cb= part and I still get the same response. I believe this is what tricked the tool into flagging a vulnerability: it injected a -- with some characters and still managed to get a proper response.
How can I make sure that changing the URL parameter to something that does not exist, returns a 404 or a forbidden message?
Along the same lines, when I try to do a GET (or simply a browser call) for:
https://demo.meteor.app/packages/accounts-base.js?hash=13rhofnjarehwofnje
I get the auto generated JS file for accounts-base.js.
If I manipulate the hash= value, I still get the same accounts-base.js file rendered. Shouldn’t it return a 404? If not, what role does the hash play? I feel that the vulnerability-testing tool is wrongly flagging such URL manipulations and concluding that there is some vulnerability in the application.
Summarizing my question:
How do I make sure that manipulating the URL gives me a 404, or at the very least a forbidden message, instead of always giving a 200 OK in a Meteor application?

Initial Traces created by Spring-Cloud-Gateway are all named "/", no matter the path

I've integrated sleuth into my application gateway and the services behind it. The traces in Stackdriver (GKE) look good but the root-span is always named "/". For example:
The second span is also created by the gateway and has a much better name.
How can I configure sleuth in my gateway service to use a different naming, or fix whatever causes the two spans?
EDIT1:
I created a minimal project with spring-gateway, sleuth and gcp and wrote a LoggingReporter to print all reported spans while having GCP auto-config working.
StackdriverHttpClientParser names spans based on the request URI. The second span is created by the TraceWebFilter based on a request with the full URI; the first span is created by the HttpClientBeanPostProcessor based on the URI "/".
I don't think this is a GCP issue; it is probably a problem with spring-gateway. Interestingly, the TraceWebFilter span is created first, but the PostProcessor one is still the parent.
EDIT2: I created an issue in spring sleuth https://github.com/spring-cloud/spring-cloud-sleuth/issues/1535
I agree with the comment made by Marcin: the problem could be on the Stackdriver side. You can validate this by running a trace in your environment (offline), and also by making sure that the x-cloud-trace-context: TRACE_ID/SPAN_ID header is formatted correctly; from what I have seen there are three ways to do it, and they are mentioned here.
If the trace comes out correct when run offline, without changing anything, then the problem is with Stackdriver.
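For reference, the header the answer mentions has this shape (the trace id below is the illustrative value from Google's documentation; the span id is made up, and the optional ;o=1 suffix forces the request to be traced):

```
x-cloud-trace-context: 105445aa7843bc8bf206b12000100000/12345;o=1
```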

DirectApiAuthorizationRequired with Microsoft Flow calling Microsoft Flow

I'm attempting to incorporate subroutines into Microsoft Flow which, per posts online, seems to be done by creating a flow that is called via HTTP by another flow. Creating a simple flow that I can call from Postman works great. The problem occurs when I call it from my main flow.
It wanted an API version, so I set the query parameter api-version to 2016-10-01.
Now, when it runs, it gives the error:
"code": "DirectApiAuthorizationRequired",
"message": "The request must be authenticated only by Shared Access scheme."
Again, the called flow works fine from Postman. It's when called from Flow that it gives the error. All the steps I see online are for Logic App or other tools. Suggestions?
I discovered that when I was recopying the URL, I had lost the authentication information, as it had been moved to Queries in my REST client, so the call was not actually authenticating. So, if anyone else has this issue, copy the URL from the original source!
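For anyone comparing URLs: the HTTP-trigger callback URL carries its Shared Access Signature in the query string, and those parameters must survive the copy along with api-version. The shape is roughly as below (host, ids, and signature are illustrative placeholders):

```
https://prod-00.westus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke
    ?api-version=2016-10-01
    &sp=%2Ftriggers%2Fmanual%2Frun
    &sv=1.0
    &sig=<signature>
```

Dropping sp, sv, and sig leaves the request unauthenticated, which is consistent with the DirectApiAuthorizationRequired error above.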

AWS Lambda, Caching {proxy+}

A simple ASP.NET AWS Lambda is uploaded and functioning with several GETs like:
{proxy+}
api/foo/bar?filter=value
api/foo/barlist?limit=value
with paths tested in Postman as:
//#####.execute-api.us-west-2.amazonaws.com/Prod/{proxy+}
Now I want to enable API caching, but when doing so only the first API call gets cached, and all subsequent calls incorrectly return that first cached value.
i.e. //#####.execute-api.us-west-2.amazonaws.com/Prod/api/foo/bar?filter=value == //#####.execute-api.us-west-2.amazonaws.com/Prod/api/foo/barlist?limit=value; in terms of the cache these return the same result, but they shouldn't.
How do you set up caching in API Gateway so that it correctly sees these as different requests, per both path and query?
I believe you can't use {proxy+} for this, because {proxy+} is a resource/integration itself and that is where the caching gets applied. Or rather, you can (because you can cache any integration), but then you get the result you're getting.
Note: I'll use the word "resource" a lot because I think of each item in API Gateway as the item in question, but technically AWS documentation will say "integration", because it's not just the resource but the actual integration on said resource... and said resource has an integration and parameters, or what I'll go on to call query string parameters. Apologies to the terminology police.
Put another way: if you had two resources, GET foo/bar and GET foo/barlist, then you'd be able to set caching on either or both. It is at this resource level that caching exists (don't think of it as the final URL path so much as the actual resource configured in API Gateway). Unfortunately, API Gateway doesn't know to break {proxy+} out into an unlimited number of paths. Strictly it's method plus resource, so I believe you could have different cached results for GET /path and POST /path.
However, you can also choose the integration parameters as cache keys. This would mean that ?filter=value and ?limit=value would be two different cache keys with two different cached responses.
Should foo/bar and foo/barlist take the same query string parameters (and you're still using {proxy+}), you'll run into the duplicate issue again.
So you may wish to use foo?action=bar&filter=value and foo?action=barlist&filter=value in that case.
You'll need to configure this, of course, for each query string parameter, so that may start to diminish the ease of the {proxy+} catch-all. Terraform.io is your friend.
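To illustrate the Terraform route, a hedged sketch of per-parameter cache keys on a {proxy+} integration (resource names and the surrounding API definition are assumptions; the filter and limit parameters come from the question):

```hcl
# Declare the parameters on the method so they can be referenced as cache keys.
resource "aws_api_gateway_method" "proxy" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "ANY"
  authorization = "NONE"
  request_parameters = {
    "method.request.path.proxy"         = true
    "method.request.querystring.filter" = false
    "method.request.querystring.limit"  = false
  }
}

# Use the path and query strings as cache keys on the integration,
# so /api/foo/bar?filter=x and /api/foo/barlist?limit=y cache separately.
resource "aws_api_gateway_integration" "proxy" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.proxy.id
  http_method             = aws_api_gateway_method.proxy.http_method
  type                    = "AWS_PROXY"
  integration_http_method = "POST"
  uri                     = aws_lambda_function.app.invoke_arn
  cache_key_parameters = [
    "method.request.path.proxy",
    "method.request.querystring.filter",
    "method.request.querystring.limit",
  ]
}
```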
This is something I wish was a bit more automatic/smarter as well. I use {proxy+} a lot and it really creates challenges for using their caching.

Http/REST method for starting a service

I want to design a REST API to start a database, but I can't find a suitable HTTP method (aka verb).
I currently consider:
START /databases/mysampledatabase
I've browsed through a few RFCs, but then I thought someone here might point me to a de-facto standard verb.
Methods I've discarded (before I got tired of looking):
RFC 2616
OPTIONS
GET
HEAD
POST
PUT
DELETE
TRACE
CONNECT
RFC 2518
PROPFIND
PROPPATCH
MKCOL
COPY
MOVE
LOCK
UNLOCK
RFC 3253
REPORT
CHECKOUT
CHECKIN
UNCHECKOUT
MKWORKSPACE
UPDATE
LABEL
MERGE
BASELINE-CONTROL
MKACTIVITY
There are a couple of flawed assumptions here. First off, the additional HTTP verbs (aside from the CRUD ones) should be considered non-RESTful.
So there are two ways I can interpret this question, and I have an answer for both:
1. What's the most appropriate HTTP method for starting a service
There's nothing quite like what you need, and I would advise simply using POST.
2. What's a good RESTful way to start a service
First, you should not see 'starting the service' as the action. It's easier to think of the 'status' (started or stopped) as the resource you are changing, and to PUT an update to that resource.
So in this case, each service should have a unique URI. A GET on that URI could return something like:
{ "status" : "stopped" }
You just change 'stopped' to 'started' and PUT the new representation, and the service could then automatically begin running.
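Sketched as an HTTP exchange, using the URI from the question (the response bodies follow the status representation above; treating the start as a side effect of the state change is the answer's premise):

```
GET /databases/mysampledatabase
→ 200 OK
{ "status" : "stopped" }

PUT /databases/mysampledatabase
{ "status" : "started" }
→ 200 OK   (starting the database happens as a side effect of the state change)
```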
I wonder how useful this is, though. I'm not a REST zealot, and I think a simple POST is the best way to go.
edit: I can't delete accepted answers, but since 2013 my thoughts on what is and isn't RESTful have become quite a bit more nuanced. I still think my example of representing the changeable state of each service as a property holds.
