modsecurity: Is turning off the rule engine really necessary when implementing a whitelisting rule?

Virtually all SecRule examples for modsecurity whitelisting I found on the web include turning off the rule engine, example:
phase:1,nolog,allow,ctl:ruleEngine=Off,id:23023
However, as far as I understand from the documentation, "nolog" combined with "allow" should already have exactly that effect, namely disrupting rule processing and preventing any log entries. Hence, wouldn't the following configuration be exactly equivalent?
phase:1,nolog,allow,id:23023
If I am wrong, where's the difference between the two?
I am using modsecurity 2.9.3.

I’ve not seen that before but I can take a guess why it’s there.
The allow action is a disruptive action. When ModSecurity runs in DetectionOnly mode, disruptive actions (including allow) are not actually executed, which means any subsequent rules still run, even though they would not run in the normal On mode. This can make the logs very noisy and can make you think you need to tune more rules than you actually do.
The ctl action is not disruptive, so it does execute even in DetectionOnly mode. Therefore, by adding ctl:ruleEngine=Off to your allow rules, only the real errors are logged in DetectionOnly mode.
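For illustration, a complete whitelisting rule of that shape might look like the sketch below (the client IP address is a placeholder, not taken from the question); the ctl:ruleEngine=Off ensures that later rules neither run nor log, even when SecRuleEngine is set to DetectionOnly:
# Hypothetical whitelist rule: allow a trusted client address and stop all
# further rule processing and logging for this transaction.
SecRule REMOTE_ADDR "@ipMatch 203.0.113.10" \
    "id:23023,phase:1,nolog,allow,ctl:ruleEngine=Off"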
I've actually done the opposite and used ctl:ruleEngine=On to make the allow action take effect even in DetectionOnly mode. For example, near the beginning of all my rules I have a rule that matches GET requests with no parameters for index.html pages and treats them as reasonably safe, so there is no need to run the rest of the rules on them. This saves processing time and avoids false flags.
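A minimal sketch of that kind of rule, assuming index.html is the target and using an arbitrary rule id (neither detail is from the answer itself): ctl:ruleEngine=On sits on the last rule of the chain so it only fires when the whole chain matches, and the allow is then honoured even in DetectionOnly mode.
# Hypothetical early-exit rule: plain GET requests for index.html with no
# parameters are allowed and skip the remaining rules.
SecRule REQUEST_METHOD "@streq GET" "id:10001,phase:1,nolog,allow,chain"
    SecRule REQUEST_FILENAME "@endsWith /index.html" "chain"
    SecRule &ARGS "@eq 0" "ctl:ruleEngine=On"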

Related

Security Audit Issue [For Asp.Net WebForms] : Source code disclosed

After a security audit of an ASP.NET application I received an error report, and one of the findings is 'Source Code Disclosed'.
How should I resolve this issue and prevent anyone from viewing the code?
This is JavaScript code, which is really commonly exposed/disclosed (*), simply because it is intended to be downloaded to the browser, where it then runs. Labelling this a risk might seem excessive, although there could be some risk depending on what you put in it.
The question is mainly: could this code be exploited, or could it be altered into something that is dangerous?
The answer is to not put secrets in it, and to never rely on client-side-only logic and validation. Always have a server-side equivalent that enforces whatever rules need to be enforced, use SSL/HTTPS so the connection is secure, and then you should be good.
(*) Just hit F12, go to the Sources or Debugger tab, and you'll see it there as well.

Azure Front Door WAF is blocking .AspNet.ApplicationCookie

I'm wondering if anyone else has had this issue with Azure Front Door and the Azure Web Application Firewall and has a solution.
The WAF is blocking simple GET requests to our ASP.NET web application. The rule that is being triggered is DefaultRuleSet-1.0-SQLI-942440 SQL Comment Sequence Detected.
The only place I can find an SQL comment sequence is in the .AspNet.ApplicationCookie, as in this truncated example: RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH. If I remove the two dashes '--' from the cookie value, the request successfully gets through the firewall. As soon as I add them back, the request gets blocked by the same firewall rule.
It seems that I have two options: disable the rule (or change it from Block to Log), which I don't want to do, or change the .AspNet.ApplicationCookie value to ensure it does not contain any text that would trigger a firewall rule. The cookie is generated by the Microsoft.Owin.Security.Cookies library and I'm not sure I can change how it is generated.
I ran into the same problem as well.
If you look at the cookie value RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH, it contains two dashes (--), which is a potentially dangerous SQL comment sequence: an attacker could use it to comment out the rest of a query and run their own command instead.
But obviously this cookie value is never run as a query on the SQL side, and we are sure of that, so we can create rule exclusions so that specific conditions are not evaluated.
Go to your WAF > click Managed rules on the left blade > click Manage exclusions at the top > click Add.
In your case, adding this rule would be fine:
Match variable: Request cookie name
Operator: Starts With
Selector: .AspNet.ApplicationCookie
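If the policy is managed as code rather than through the portal, the exclusion above can also be expressed in the WAF policy's managedRules section. The fragment below is a rough sketch based on the ARM schema for Front Door WAF policies (property names may vary slightly by API version):
"exclusions": [
  {
    "matchVariable": "RequestCookieNames",
    "selectorMatchOperator": "StartsWith",
    "selector": ".AspNet.ApplicationCookie"
  }
]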
However, I use ASP.NET Core 3.1 with ASP.NET Core Identity and encountered other issues as well, such as __RequestVerificationToken.
Here is my full list of exclusions. I hope it helps.
PS: I think there is a glitch at the moment. If you have an IP restriction on your environment (such as UAT), these exclusions cause the Web Application Firewall to bypass the IP restriction, and your UAT site becomes open to the public even though the custom IP restriction rule is still in place on your WAF.
I ran into something similar and blogged about it here: Front Door incomplete first request.
To test this I created a web application and put it behind the Front Door service. In that test application I iterate over all the properties of the HttpContext.HttpRequest and print them out. As far as I can see right now, there are two properties that differ between a direct request and a request through Front Door: both the AcceptTypes and the UserLanguages properties are empty for Front Door requests, while they are populated when accessing the test application directly.
I'm not quite sure why the first Front Door request differs from a direct request. Is it a bug? Is it intentional, and if so, why? Or is it because Front Door is built on a framework that doesn't support these properties, so they end up empty when the request is forwarded?
Unfortunately I didn't find a solution to the issue, but to answer the question of whether anyone else is experiencing this: I did experience something similar.
It seems that the cookie got corrupted. I was comparing the fields that existed before against a healthy cookie, and my guess is that somewhere in the content of a field it is being interpreted as a SQL comment/truncation statement and is triggering the rule. I still have to determine whether this is true and/or what caused it.
I ran into this issue, but the token was being passed via the request query string rather than via a cookie. In case it might help someone: for the specified host I had to allow the request via a custom rule doing a regex match on the RequestUri, using the following regex (taken from the original managed rule):
:\/\\\\*!?|\\\\*\/|[';]--|--[\\\\s\\\\r\\\\n\\\\v\\\\f]|--[^-]*?-|[^\\u0026-]#.*?[\\\\s\\\\r\\\\n\\\\v\\\\f]|;?\\\\x00

Apply proxy rules to only one usergroup

I am attempting to apply an ACL ruleset to members of a specific usergroup on a Linux box running Squid that I administer.
I have created the ruleset itself without much trouble, but I am having difficulty configuring an authentication scheme that applies those rules only to a specific subset of users on the system, while leaving the remainder of the traffic untouched.
It seems that the auth_param setting is what I am looking for, but I haven't had much luck parsing the documentation.
Ideally, I would like an auth_param setting that sends the username to a shell script, which would check for that user's existence in the relevant group, and then return some value to determine whether or not to apply the rules to them.
The documentation seems to suggest that such a mechanism would be possible, but I haven't been able to find any relevant examples.
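What is being described maps more closely to Squid's external ACL helpers than to auth_param itself: auth_param still handles the authentication, while an external_acl_type helper receives the authenticated username and answers OK or ERR. A minimal sketch, assuming a hypothetical helper script path and an already-defined blocked_sites ACL (both are placeholders):
# squid.conf (sketch): hand the login name to a script that checks group membership.
external_acl_type in_restricted_group ttl=300 %LOGIN /usr/local/bin/check_group.sh
acl restricted_users external in_restricted_group
# Apply the restrictive ruleset only to members of the group; everyone else is untouched.
http_access deny restricted_users blocked_sites
http_access allow all
# check_group.sh reads one username per line on stdin and prints OK if the
# user is in the relevant group, ERR otherwise.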

Nginx rewrite and (later) load balancer together: is that possible?

I have an old site based on IIS that, for historical reasons, uses a lot of RewriteRule directives via Helicon APE. Now, when we hit the server with multiple clients, Helicon APE crashes quite frequently. The set of IIS servers (currently 4) is expected to grow and the whole system to scale, and a lot of effort has recently gone into the web app to support new features and user growth.
Someone suggested using nginx as a load balancer in front of the IIS servers, since it will handle the increasing amount of traffic much better, and applying those rewrites there, so the URLs would be converted to the new formats before being load balanced to IIS.
Following that advice, we set up a proof-of-concept nginx 1.13 on Linux with rewrite rules (ported from the ones used in APE) and proxy_pass pointing at two of the servers. But we have noticed several issues this way:
Rewrite rules do not seem to work the way they should; we can verify that the regexes are valid (by putting them in locations), but the URL does not appear to be rewritten.
proxy_pass usually returns a 400 Bad Request or does not hit the backend servers at all.
However, if we set up several locations with some of the simpler regexes and put the proxy_pass to the backend server and the new URL patterns inside them, the servers are hit with the right requests. This approach brings its own problems: some of our rewrites build on others, so a transformation may take three steps (one changes the first part of the URL, the second changes another part, and the third joins everything into the final URL with a break flag). That cannot be done when the rewrites are split across separate locations.
A lot of research across Stack Overflow, blogs, support sites and mailing lists has gone into finding a solution to our problem, but the suggested solutions often do not work at all (or only partially), and to be honest, after a week of this we are concerned that the architecture we had in mind is not feasible.
We have tried this with HAProxy as well, with really odd behaviour from HAProxy (i.e. error messages appended to the request being load balanced).
As the title summarizes, after the long description above, the question is: can someone confirm that what we are trying to achieve can really be done with nginx? If not, what could be used instead?
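For reference, the architecture being attempted would look roughly like the sketch below; the upstream addresses and URL patterns are invented for illustration, not taken from the question. Rewrite directives at server level are evaluated in order before the location match, so chained multi-step rewrites can be expressed there, and the rewritten URI is what gets proxied to the IIS backends:
# nginx.conf fragment (sketch)
upstream iis_backend {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}
server {
    listen 80;
    # Chained rewrites: without a flag, processing falls through to the next
    # rewrite directive; the final one uses break to stop rewriting.
    rewrite ^/old-area/(.*)$       /area/$1;
    rewrite ^/area/([0-9]+)/view$  /area/view?id=$1 break;
    location / {
        proxy_pass http://iis_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}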

Why is this Phabricator Herald rule not applied?

I am trying to create a Herald rule to block commits with empty commit messages. The rule is a global rule that applies to Commit Hook: Commit Content. Unfortunately, I have been unable to get the rule to trigger with the Test Console.
This led me to try altering the conditions in various ways, ultimately trying this:
When any of these conditions are met:
Always
Take these actions every time this rule matches:
Block push with message: No empty commit messages allowed.
It seems like this should cause Phabricator to apply this rule to any commit, but according to the Rule Transcript even this rule is not applied.
Should it be? If so, what might cause this behavior?
Through discussion in the #phabricator channel on irc.freenode.net, I learned that testing pre-commit Herald rules with the Test Console is not currently supported by Phabricator. The developer who helped me created a task for this issue, which can be found here: https://secure.phabricator.com/T9719.
With the Test Console not an option, I am not entirely sure how to test Herald rules of this type without allowing unacceptable commits into the repository. I had read the custom hooks page at https://secure.phabricator.com/book/phabricator/article/diffusion_hooks/, which explains how to install custom hooks. Interestingly, it states that "These hooks will run only after all the Herald rules have passed and Phabricator is otherwise ready to accept the commit or push." I asked whether it would be possible to create a hook of this type that denies all commits, and then test the Herald rules by actually trying to make commits as normal. It was indicated that this might work. I haven't had a chance to test this yet, so I will post an update once I know more.
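One way to set up that test, sketched under the assumption that the custom hook is a plain shell script installed as described in the diffusion_hooks article (the message text is arbitrary): since these hooks run only after Herald has already passed the push, unconditionally rejecting in the hook keeps every commit out of the repository while still exercising the Herald rules.
#!/bin/sh
# Hypothetical custom hook for testing: anything that reaches this point was
# already accepted by Herald, so reject it to keep the repository clean.
echo "Rejected by test hook: Herald rules passed for this push." >&2
exit 1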
