I'm sending GET requests to the Trello API, and I need to test my error-handling script. Can I send something that will reliably provoke a server-side error?
If all you need to do is have a server generate HTTP error code replies, and you are fine with not using the real Trello API server (but can switch to another host), you could give HttpBin a try. It's a free, anonymous, hosted service that just replies with whatever status you ask it for.
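For instance, a GET to httpbin's /status/{code} endpoint returns whatever status code you put in the path. A minimal sketch using libcurl (you don't say which HTTP client you use, so treat this purely as an illustration; any client will do):

#include <iostream>
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Ask httpbin to reply with a 500; substitute any code you want to test,
    // e.g. /status/503.
    curl_easy_setopt(curl, CURLOPT_URL, "https://httpbin.org/status/500");
    CURLcode res = curl_easy_perform(curl);

    if (res == CURLE_OK) {
        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        std::cout << "Got HTTP status " << status << std::endl;  // prints 500
    } else {
        std::cerr << "Request failed: " << curl_easy_strerror(res) << std::endl;
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}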
Related
This might be a dumb question about security.
I'm working on a file-access service and trying to add authentication to it.
For example, I should be able to tell who issued a request and whether they have permission to access the file they are requesting.
I'm using gRPC for communication, and gRPC seems to natively support a number of authentication methods. I tried one of them (https://grpc.io/docs/guides/auth/#using-google-token-based-authentication). My client-side code is exactly the same as in the doc. I'm using C++.
But I get this error:
E0812 19:03:32.173663955 3576491 ssl_transport_security.cc:1509] Handshake failed with fatal error SSL_ERROR_SSL: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER.
I'm wondering what else I should do.
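For reference, the client-side setup from that section of the guide looks roughly like this (a sketch only; the endpoint is the guide's example host, not my actual service):

#include <memory>
#include <grpcpp/grpcpp.h>

int main() {
  // Composite channel credentials: SSL plus a Google OAuth2 token taken
  // from the environment (Application Default Credentials).
  std::shared_ptr<grpc::ChannelCredentials> creds =
      grpc::GoogleDefaultCredentials();

  // Channel against a TLS endpoint, as in the guide.
  std::shared_ptr<grpc::Channel> channel =
      grpc::CreateChannel("greeter.googleapis.com", creds);

  // ...create the generated service stub from `channel` and issue RPCs...
  return 0;
}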
I have an ASP.NET MVC3 website with a REST API service.
When a user passes in an invalid API key, or has been blacklisted, I wish to ignore the request and send no response.
I know I could send back a 404 or a 503, but if someone keeps polling me I would ideally like to simply not respond, causing a time-out on their end and thus slowing the hammering my server gets.
Is this possible within ASP.NET MVC3? If so, any help would be most appreciated.
Thank you
For what you want, you still need to parse the request, so it will always consume server resources, especially if you have an annoying user sending a query every 500 ms...
In these situations you would block the IP / headers of the request for a period of, say, 10 minutes. It would be a very good idea to block it at your load balancer and prevent that request from even reaching your application; this is easily accomplished if you're using Amazon Web Services to run your service, and other cloud providers support it as well, if you are using cloud hosting at all.
If you can only work within your web application (and this is an untested solution), you could add an ignored route to your routing mechanism, like:
routes.IgnoreRoute("{*allignore}", new { allignore = @".*\.ignore(/.*)?" });
and, upon checking that the IP is banned, simply redirect (using, for example, Response.Redirect()) to a .ignore path on your site... or why not redirect that request to google.com, just for the fun of it?
I have my SharePoint web service URL, but when I try to access it I get an HTTP Request Error. My SharePoint web service has credentials on it, and I have also set them in Flex:
webService.setRemoteCredentials("CITMOSS\\Administrator", "Pa$$w0rd");
When I trace the URL, my console does read the XML, but I am unable to call the method; it throws an HTTP Request Error.
Any possible solutions for this?
Thanks
If you access that web service manually from the same machine does it work?
Is there a way you could put something like Fiddler in the way to trace the network traffic and find out what is going on?
There are loads of things that could be causing a problem here: in Flex, in the web service, and absolutely everywhere in between. Checking your event logs and SharePoint logs might also be informative, but if those yield nothing then looking at the HTTP traffic will probably be most useful.
I have an ASP.NET 3.5 site with a *.asmx that serves several web service methods. The only client that should be calling these methods is one I wrote, and it calls them using POST requests. However, my error logs show many InvalidOperationException errors due to these methods being called with a GET request.
Question: What might be causing these GET requests? Might proxies convert POST requests to GET requests without the client that made the request knowing about it?
To expand on rusanu's answer: bots, crawlers, and/or hackers?
Bots and crawlers?
It's always possible there is a bug in your client app. Why not get hold of an HTTP sniffer so you can see exactly what requests are being sent?
When analyzing traffic with a packet sniffer, we are seeing an HTTP response from a WebLogic server prior to the completion of the HTTP POST to that server.
In this case, the JSP page on the server is basically a static page, with no logic to do anything with the contents of the POST at this time.
But why would the server send the response prior to completion of the POST?
I found WebLogic documentation about how to configure the server to ignore a denial-of-service attack that uses HTTP POST. Maybe that is what is happening?
No one I know has seen this behaviour before. Maybe some WebLogic-savvy person will know what is going on.
Thanks
I don't think that WebLogic is analyzing the JSP to determine whether it is static or not.
My guess is that either:
- someone else was accessing the server at the same time, or
- you saw the answer to a previous request.
[EDIT] To determine what is going on, I suggest setting a breakpoint in the JSP. If you still get an answer without hitting the breakpoint, something further up the stack must be intercepting the request (for example, a cache).