I have a Spring MVC web application which is "Active scanned" by the ZAP tool. ZAP raises two SQL Injection alerts at High risk / Medium confidence, which I believe are false positives.
The original URL is /msg/showList? which returns 200 OK and a JSON list of messages. While scanning, ZAP adds a parameter, /deliverymsg/showList?query=query+AND+1%3D1+--+, which also returns 200 OK and the same JSON list of messages. There is no change in the response; the list is the same. The application never reads the "query" parameter added by ZAP, so there is no actual SQL injection here.
But ZAP still reports this as High risk / Medium confidence.
The second original URL is /filterlist?fromDate=&toDate=&_csrf=1534403682524, which returns 200 OK and a list. ZAP injects the SQL condition into it: /filterlist?fromDate=&toDate=&_csrf=1534403682524+AND+1%3D1+--+
This parameter is the CSRF token added implicitly by Spring Boot and, again, is not read directly by the application. But ZAP raises the same High risk / Medium confidence alert.
I want to understand how ZAP is working here, and how to stop this false positive so ZAP doesn't alert. I have had cases in the past where I changed the application code to pass the test, but I can't think of a clean solution for this one.
I'm thinking of adding a HandlerInterceptor that checks request parameters for the word "AND" and returns HTTP 400. I don't really want to do that, because it would only be fooling ZAP into not raising the High alert.
I understand I could also record this as a false positive, with evidence, and release without fixing, but I cannot do that due to internal policies.
I'm running ZAP in its default configuration.
Update:
I added a HandlerInterceptor that rejects requests containing AND or query, and so far re-running ZAP against only the reported URLs hasn't produced any alert. I wonder why that is, because the URLs ZAP builds to attack contain many more SQL keywords, like UNION ALL, and I have only rejected requests for two keywords. How could that solve the problem?
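Roughly, the interceptor I added looks like the sketch below (Spring 5+ style, where HandlerInterceptor has default methods); the class name and keyword list here are simplified for illustration, not my exact code.

import java.io.IOException;
import java.util.regex.Pattern;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.HandlerInterceptor;

// Simplified sketch: reject any request whose query string contains the keywords ZAP injected.
public class SqlKeywordRejectingInterceptor implements HandlerInterceptor {

    private static final Pattern BLOCKED =
            Pattern.compile("\\b(AND|query)\\b", Pattern.CASE_INSENSITIVE);

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws IOException {
        String queryString = request.getQueryString();
        if (queryString != null && BLOCKED.matcher(queryString).find()) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST); // return 400 and stop processing
            return false;
        }
        return true;
    }
}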
Why does ZAP append its own parameter, query, which the application will never read? I don't understand the logic behind attacking with query.
It is possible that the findings are false positives. As you can imagine, accounting for slight variances in app behaviour across billions of potential web implementations is not exactly straightforward, and based on the details provided here it's hard to say. Usually for the SQLi alerts there is additional info in the "Other Info" section of the alert that may provide further clues as to ZAP's test and the observed "weirdness".
You can mark alerts as False Positive in the UI (https://github.com/zaproxy/zap-core-help/wiki/HelpUiDialogsAddalert#confidence) or via the Web API. You can also install the "Context Alert Filters" add-on and create rules in your context to set these as False Positive in the future (assuming you export/import your context). Further details on Context Alert Filters are here: https://github.com/zaproxy/zap-extensions/wiki/HelpAddonsAlertFiltersAlertFilter
ZAP's code is open source and publicly available so you can always look at the SQLi scanner (https://github.com/zaproxy/zap-extensions/blob/master/src/org/zaproxy/zap/extension/ascanrules/TestSQLInjection.java) and if you see an issue submit a PR with a fix, or open a new issue for the team: https://github.com/zaproxy/zaproxy/issues
I added a HandlerInterceptor that rejects requests containing AND or query, and so far re-running ZAP against only the reported URLs hasn't produced any alert. I wonder why that is, because the URLs ZAP builds to attack contain many more SQL keywords, like UNION ALL, and I have only rejected requests for two keywords. How could that solve the problem?
Well, the alerts you received didn't have anything to do with the UNION injections, did they? By simply preventing the issue that was being alerted on (filtering AND or query), you've hidden the behaviour from ZAP's analysis. Further, the detection mechanisms for boolean-based SQLi and union-based SQLi are different.
When you run an active scan that includes the input vector "URL Query String & Data Driven Nodes", any request without a parameter will also be tried with the query parameter, as that may sometimes uncover unknown handling (or mis-handling) within the web app, as well as some DOM XSS, etc., that might otherwise go undetected.
I have a .NET Core app deployed on Azure with Application Insights enabled.
Sometimes the Application Insights end-to-end transaction details do not display all telemetry.
Here it only shows the error and not the request, or perhaps the request is logged but the two are not displayed together (it's difficult to tell because many people use it).
It should be like:
Sometimes the request is logged but with no error log.
What could be the reason for this happening? Do I need to look into a specific Application Insights set-up or feature?
Edit:
As suggested by people here, I tried disabling the sampling feature, but it still doesn't work. There is an open question about this as well.
This usually happens due to sampling. By default, adaptive sampling is enabled in ApplicationInsights.config, which basically means that only a certain percentage of each telemetry item type (Event, Request, Dependency, Exception, etc.) is sent to Application Insights. In your example, one part of the end-to-end transaction probably got sent to the server while another part got sampled out. If you want, you can turn off sampling for specific types, or completely remove the AdaptiveSamplingTelemetryProcessor from the config, which disables sampling entirely. Bear in mind that this leads to higher ingestion traffic and higher costs.
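For reference, the processor entry to remove from ApplicationInsights.config usually looks something like the following; the exact type string and child settings vary by SDK version, so treat this as an approximation rather than the definitive default:

<TelemetryProcessors>
  <!-- Deleting this Add element disables adaptive sampling entirely -->
  <Add Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.AdaptiveSamplingTelemetryProcessor, Microsoft.AI.ServerTelemetryChannel">
    <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>
    <ExcludedTypes>Event</ExcludedTypes>
  </Add>
</TelemetryProcessors>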
You can also configure sampling in the code itself, if you prefer.
Please find here a good overview of how sampling works and can be configured.
This may be related to:
When using SDK 2.x, you have to track all events yourself and send the telemetry to Application Insights.
When using auto-instrumentation with the 3.x agent, the agent automatically collects traffic, logs, etc., and you have to pay attention to the sampling settings in applicationinsights.json, where you can filter the events.
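For example, with the 3.x Java agent the sampling percentage lives in applicationinsights.json; a minimal sketch that effectively disables sampling (at the cost of higher ingestion volume) would be:

{
  "sampling": {
    "percentage": 100
  }
}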
If you are using Java, these are the supported logging libraries:
- java.util.logging
- Log4j, which includes MDC properties
- SLF4J/Logback, which includes MDC properties
I'm wondering if anyone else has had this issue with Azure Front Door and the Azure Web Application Firewall and has a solution.
The WAF is blocking simple GET requests to our ASP.NET web application. The rule being triggered is DefaultRuleSet-1.0-SQLI-942440, "SQL Comment Sequence Detected".
The only place I can find an SQL comment sequence is in the .AspNet.ApplicationCookie, as per this truncated example: RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH. If I remove the two dashes '--' from the cookie value, the request gets through the firewall successfully. As soon as I add them back, the request is blocked by the same firewall rule.
It seems that I have two options: disable the rule (or change it from Block to Log), which I don't want to do, or change the .AspNet.ApplicationCookie value to ensure it does not contain any text that would trigger a firewall rule. The cookie is generated by the Microsoft.Owin.Security.Cookies library and I'm not sure I can change how it is generated.
I ran into the same problem as well.
If you look at the cookie value RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH, there are two dashes (--), which is the potentially dangerous SQL sequence that can comment out the rest of an SQL statement; an attacker may run their own command instead of yours after commenting out your query.
But obviously this cookie won't run any query on the SQL side, and we are sure about that. So we can create rule exclusions so that the rule is not applied to this cookie.
Go to your WAF > click Managed rules on the left blade > click Manage exclusions at the top > and click Add.
In your case, adding this rule would be fine:
Match variable: Request cookie name
Operator: Starts With
Selector: .AspNet.ApplicationCookie
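If you manage the WAF policy as infrastructure-as-code rather than through the portal, the same exclusion can be expressed under the managed rule set in the policy definition. The snippet below is a rough sketch based on the Front Door WAF policy ARM schema, so verify the property names against the current reference:

"managedRuleSets": [
  {
    "ruleSetType": "DefaultRuleSet",
    "ruleSetVersion": "1.0",
    "exclusions": [
      {
        "matchVariable": "RequestCookieNames",
        "selectorMatchOperator": "StartsWith",
        "selector": ".AspNet.ApplicationCookie"
      }
    ]
  }
]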
However, I use ASP.NET Core 3.1 with ASP.NET Core Identity, and I encountered other issues as well, such as __RequestVerificationToken.
Here is my full list of exclusions. I hope it helps.
PS: I think there is a glitch at the moment. If you have an IP restriction on your environment, such as UAT, these exclusions cause the Web Application Firewall to bypass the IP restriction, and your UAT site becomes open to the public even if you still have a custom IP restriction rule on your WAF.
I ran into something similar and blogged about it here: Front Door incomplete first request.
To test this I created a web application and put it behind the Front Door service. In that test application I iterate over all the properties of HttpContext.HttpRequest and print them out. As far as I can see right now, there are two properties that differ between a direct request and a request through Front Door: both the AcceptTypes and the UserLanguages properties are empty for Front Door requests, while they are fully populated when accessing the test application directly.
I'm not quite sure why the first Front Door request is different from a direct request. Is it a bug? Is it intentional, and if so, why? Or is it because Front Door is developed using a framework that doesn't support these properties, so they end up empty when the request is forwarded?
Unfortunately I didn't find a solution to the issue, but to answer the question if anyone else is experiencing this: I did experience something similar.
It seems that the cookie got corrupted. I was comparing the fields that existed before against a healthy cookie, and my guess is that somewhere in the content of the field it is being interpreted as a truncated SQL statement, which is probably triggering the rule. I still have to determine whether this is true and/or what caused it.
I ran into this issue, but the token was being passed via the request query rather than via a cookie. In case it helps someone: for the specified host I had to allow it via a custom rule doing a regex match on RequestUri, using the following regex (taken from the original managed rule):
:\/\\\\*!?|\\\\*\/|[';]--|--[\\\\s\\\\r\\\\n\\\\v\\\\f]|--[^-]*?-|[^\\u0026-]#.*?[\\\\s\\\\r\\\\n\\\\v\\\\f]|;?\\\\x00
I have some web services which are called by various clients, including mobile and web. I have no control over the client code.
But I need to identify who is calling my web services, via the IP address or something else.
Is there any way to identify that?
A better approach to tracking this sort of thing is to introduce the notion of an API key. That way you know exactly who is using your service and you can track their usage etc.
On every call to your service the user would have to provide their key as a means of authorisation (not authentication). This sort of approach generally helps avoid misuse of an API; however, it can't eradicate it completely. At least with this approach, if you do find a malicious user, it's as simple as disabling that particular API key.
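The idea is stack-agnostic. As a rough illustration only, here is a minimal sketch of the server-side check as a Java servlet filter; the X-Api-Key header name, the in-memory key store, and the class names are all made up for the example:

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: rejects calls without a known API key and identifies who is calling.
// Assumes Servlet 4.0+, where Filter.init/destroy have default implementations.
public class ApiKeyFilter implements Filter {

    // In practice this would be backed by a database or a key-management service.
    private static final Map<String, String> KEY_TO_CLIENT = new ConcurrentHashMap<>(
            Map.of("demo-key-123", "mobile-app", "demo-key-456", "web-frontend"));

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String apiKey = request.getHeader("X-Api-Key");
        String client = apiKey == null ? null : KEY_TO_CLIENT.get(apiKey);
        if (client == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Missing or unknown API key");
            return;
        }
        // Log or meter usage per client here; revoking a key cuts off that client.
        chain.doFilter(req, res);
    }
}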
You should check your IIS logs; these will list all the requests made to your server (if you have logging turned on, which it is by default).
Search the log for the URL of the service, check the entries around the time of the requests you are having issues with, and they will list the IP address.
Your logs can generally be found at: C:\inetpub\logs\LogFiles
If the folder is empty then you are out of luck for now; you will need to turn logging on in IIS, and after a few hours you will be able to check the logs and start seeing where requests are coming from.
E.g. a sample from a log:
2012-10-29 04:49:44 129.35.250.132 GET /favicon.ico/sign-in returnUrl=%252ffavicon.ico 82 - 27.x.x.x Mozilla/5.0+(Windows+NT+6.1;+rv:16.0)+Gecko/20100101+Firefox/16.0 200 0 0 514
The first field is the date and time, and the client IP address is the 27.x.x.x entry (partially redacted as it's from a real log).
OK, I already know all the on-paper reasons why I should not use an HTTP GET for a RESTful call that updates state on the server (and thus possibly returns different data each time). I know this is wrong for the following 'on paper' reasons:
HTTP GET calls should be idempotent
N > 0 calls should always GET the same data back
Violates HTTP spec
HTTP GET call is typically read-only
And I am sure there are more reasons. But I need a concrete, simple example for justification other than "Well, that violates the HTTP spec!" ... or at least I am hoping for one. I have also already read the following, which are more along the lines of the list above: Does it violate the RESTful when I write stuff to the server on a GET call? and
HTTP POST with URL query parameters -- good idea or not?
For example, can someone justify the above and explain why it is wrong/bad practice/incorrect to use an HTTP GET, say, with the following RESTful call:
"MyRESTService/GetCurrentRecords?UpdateRecordID=5&AddToTotalAmount=10"
I know it's wrong, but hopefully it will help provide an example to answer my original question. So the above would update record ID 5 with AddToTotalAmount = 10 and then return the updated records. I know a POST should be used, but let's say I did use a GET.
How exactly does (or can) this cause an actual problem? Other than all the violations from the bullet list above, how can using an HTTP GET for the above cause a real issue? Too many times I end up in a scenario where I can justify things with "because the doc said so", but I need justification and a better understanding of this one.
Thanks!
The practical case where you will have a problem is that an HTTP GET is often retried by the HTTP implementation in the event of a failure, so in real life you can get situations where the same GET is received multiple times by the server. If your update is idempotent, there will be no problem, but if it's not idempotent (like adding some value to an amount, as in your example), you could get multiple (undesired) updates.
HTTP POST is never retried, so you would never have this problem.
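To make that concrete with the URL from the question: if the handler behind that GET looked anything like the Spring MVC sketch below (names and storage are purely illustrative), then every transparent retry, caching proxy re-fetch, or browser/link prefetch of the same URL would add another 10 to the total.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Illustrative sketch of the anti-pattern in the question: a GET that mutates state.
@RestController
public class RecordsController {

    // In-memory stand-in for the records table: record id -> running total.
    private final Map<Long, Integer> totals = new ConcurrentHashMap<>();

    @GetMapping("/MyRESTService/GetCurrentRecords")
    public Map<Long, Integer> getCurrentRecords(@RequestParam("UpdateRecordID") long recordId,
                                                @RequestParam("AddToTotalAmount") int amount) {
        // Not idempotent: every call adds to the stored total, so a silent retry by an
        // HTTP client library, an intermediary, or a prefetcher changes the data again.
        totals.merge(recordId, amount, Integer::sum);
        return totals;
    }
}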
If some form of search engine spiders your site, it could change your data unintentionally.
This happened in the past with Google's Desktop Search, which caused people to lose data because they had implemented delete operations as GETs.
Here is an important reason that GETs should be idempotent and not be used for updating state on the server, with regard to Cross-Site Request Forgery attacks. From the book Professional ASP.NET MVC 3:
Idempotent GETs
Big word, for sure — but it's a simple concept. If an operation is idempotent, it can be executed multiple times without changing the result. In general, a good rule of thumb is that you can prevent a whole class of CSRF attacks by only changing things in your DB or on your site by using POST. This means Registration, Logout, Login, and so forth. At the very least, this limits the confused deputy attacks somewhat.
One more problem: if the GET method is used, the data is sent in the URL itself. In the web server's logs, this data gets saved on the server along with the request path. Now suppose someone has access to, or reads, those log files: your data (user IDs, passwords, keywords, tokens, etc.) is revealed. This is dangerous and has to be taken care of.
In the server's log file, headers and body are not logged, but the request path is. So with the POST method, where data is sent in the body rather than in the request path, your data remains safe.
I think that reading this resource: http://www.servicedesignpatterns.com/WebServiceAPIStyles could be helpful to you in understanding the difference between a message API and a resource API.
I need to design a bug alert system, where the web support team is notified via email when a user of our website encounters an error of any sort (a database exception, or a 404).
What would be the best way to design this section of the project? Any ideas would be appreciated.
You may want to look into using the global.asax file for application-wide error intercepting. A quick search yields this step-by-step walk-through:
http://aspnetresources.com/articles/CustomErrorPages.aspx
Depending on the volume of traffic you're expecting, sending an e-mail every time an error is intercepted may not be the best approach. At best, you'd flood inboxes (and make the support staff very unhappy), and at worst you'd get your mail servers blacklisted for spamming. The approach that I've used in the past on high-traffic sites is to queue up errors in a table that is read and purged at a set interval by a separate process. The process would aggregate the errors, grouping them by type, number of occurrences, etc, then send out an e-mail report to the support mailing lists.
ASP.NET health monitoring may be of interest: http://msdn.microsoft.com/en-us/library/ms998306.aspx. It's really simpler to use than this article first appears and doesn't require any additional components - it's all built-in.
I would implement an HttpModule that captures the application's Error event.
This would allow the module to be reused across multiple applications. The destination email addresses, SMTP server, etc. could have defaults in the HttpModule and be overridden in the web.config file for maximum flexibility.