Phabricator - review diffs using Spaces for security?

I have a Phabricator install, which is connected to my Git repository. I am trying to implement a Security queue, so that tasks can be restricted to a limited number of core developers to avoid disclosing vulnerabilities. So far, that works great.
However, I think that any patches attached to Security bugs should also be hidden, at least until they are committed. Differential objects don't seem to support being part of a Space, and while they can have visibility rules, I can't use Herald to edit those visibility rules.
How can I restrict the visibility of Differential objects as they are being uploaded?
I don't necessarily need it to be done with Spaces; I could do it with the "Visible To" option and my Security project. But it is not acceptable for users outside the Security space/project to receive Herald notifications, so editing the Diff after it is added is too late.

Related

How hidden are you to a network admin

To be more specific: I am aware that an admin can see your browser history and similar data, but can they see what you do in cmd, or even whether you run cmd at all?
This question is rather vague. Do you have a specific question here? As a general rule, an administrator account exists to keep tabs on all actions performed on the host in question. The administrator would have access to whatever histories, file systems, and commands you may have executed, added, deleted, etc. In some cases the logging level may be turned down, but I would never assume that your actions are invisible to an administrator account.

Too easy to delete whole database

Is there a way to protect the database from deletion? I mean, it's very easy to click the "x" next to the root node. This would destroy the whole app and cause an enormous mess to deal with.
How to deal with this fragility?
EDIT:
Let's assume I have two Firebase accounts: one for testing and one for the launched app. I regularly log in and out to switch between them. On the test account I delete whole nodes on a regular basis. Password protection on destructive actions would prevent a very expensive mix-up between the two accounts.
If you give a user edit access to the Firebase Console of your project, that user is assumed to be an administrator of the database. This means they can perform any write operation against the database they want and are not bound by your security rules.
As a developer you probably often use this fact to make changes to your data structure while developing the app. For application administrators, you should probably create a custom administrative dashboard, where they can only perform the actions that your code allows.
There is no way to remove specific permissions, such as limiting the amount of data they can remove. It could be a useful feature request, so I suggest posting it here. But at the moment: if you don't trust users to be careful enough with your data, you should not give them access to the console.
As Travis said: setting up backups may be a good way to counter some of this anxiety.
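For writes that go through your own app (rather than the console, which as noted above bypasses rules entirely), Firebase Realtime Database security rules can block accidental deletion: a `.write` rule that requires `newData.exists()` rejects any operation that would remove the node. A minimal sketch, where the `data` path and the rule structure around it are illustrative, not from the question:

```json
{
  "rules": {
    "data": {
      ".read": "auth != null",
      ".write": "auth != null && newData.exists()"
    }
  }
}
```

With this rule, setting `/data` to `null` (Firebase's delete operation) fails validation, because `newData.exists()` is false for a delete; normal updates still pass. This does not protect against a console admin, only against clients using the SDK.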

Let selected users close their own Audits in Phabricator

I'm using Audit in Phabricator. By default, users cannot close Audits that were created for their own commits. By setting audit.can-author-close-audit to true it becomes possible for users to close Audits for their own commits.
However, I would like only some people to have this privilege. Is this possible?
I don't know of any way to allow for this. Audit support in Phabricator is second-class, as Differential is the recommended way to do code review. The best way to somewhat enforce this is to add a certain user or project to commits that trigger audits through Herald, so that you or other users will be notified if certain users close their own audits. However, this may bring about a somewhat uncomfortable social situation when these users figure out what is happening.

How to block git push before a review being accepted in phabricator?

I have been trying out the Phabricator platform for two days with the goal of using it in our team. Everything seems fairly great except for one thing I don't know how to do.
We want to enforce a code review process in our workflow, so I configured Differential. As a developer, I can then use the Arcanist command line to send a diff to the web UI, requesting that someone else review it. A reviewer can accept or reject it after reviewing. That part is fine.
But I, who should be waiting for the review to be accepted before pushing my changes to the hosted repo, can still do so with git push (rather than arc land or arc amend) without any acceptance. How can I prevent this?
In the upstream we have a Herald rule that checks for the presence of a Differential revision on each commit; when it's missing, you can send an embarrassing email, trigger an Audit, or whatever. Because we're a small team, we trigger an Audit (presuming those instances are generally emergencies and can be reviewed later). If the repository is also hosted by Phabricator, you can set a Policy on the repository governing who has access to push to it. We use this to gate contributors: frequent contributors can land reviewed code freely, while new contributors have to have their code landed manually by the upstream.
As far as I know, you can't. A user either has push rights or they don't. One option is to trust committers not to push their changes until the review is accepted. Alternatively, you could drop their right to push and let the reviewer or an administrator land the patch.
A different (maybe slightly complicated) approach might be to create Herald rules to prevent the push. But I am not sure whether Herald is flexible and powerful enough for that kind of job.
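Outside of Phabricator's own Herald/Policy machinery, the same gate can be approximated with a plain server-side pre-receive hook that rejects pushes whose commits lack a "Differential Revision:" line (the trailer `arc land` writes into commit messages). This is a hypothetical sketch of that idea, not Phabricator's own mechanism, and it only checks that the trailer exists, not that the revision was accepted:

```python
#!/usr/bin/env python3
"""Sketch of a pre-receive hook: reject commits without a
"Differential Revision:" trailer in their message."""
import re
import subprocess
import sys

# Matches the trailer arc land adds, e.g.
# "Differential Revision: https://phab.example.com/D123"
REVISION_RE = re.compile(r"^Differential Revision:\s+\S+", re.MULTILINE)


def message_has_revision(message: str) -> bool:
    """Return True if the commit message references a Differential revision."""
    return bool(REVISION_RE.search(message))


def check_push(old: str, new: str) -> list:
    """Collect commits in old..new whose messages lack the trailer."""
    log = subprocess.run(
        ["git", "log", "--format=%H%x00%B%x01", f"{old}..{new}"],
        capture_output=True, text=True, check=True,
    ).stdout
    bad = []
    for entry in filter(None, log.split("\x01")):
        sha, _, body = entry.strip().partition("\x00")
        if not message_has_revision(body):
            bad.append(sha)
    return bad


if __name__ == "__main__":
    # git feeds one "old new ref" line per updated ref on stdin.
    for line in sys.stdin:
        old, new, ref = line.split()
        for sha in check_push(old, new):
            print(f"rejected {ref}: commit {sha} has no Differential Revision")
            sys.exit(1)
```

A determined user can of course fake the trailer, so this is a guard rail against accidents rather than a security boundary; Phabricator-hosted repositories with push Policies remain the stricter option.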

Best libraries/practices to prevent OWASP Top 10 Vulnerabilities

I'm looking for the best reusable libraries and built-in features in ASP.Net to prevent the OWASP top 10 security vulnerabilities (injection, XSS, CSRF, etc.), and also easy-to-use tools the testing team can use to detect these vulnerabilities.
When do you think is the best time to start incorporating the security coding into the application during the development life cycle?
My two cents:
Never ever trust user input. This includes forms, cookies, parameters, requests...
Keep your libraries updated. New security flaws are discovered every day. Patches are released, but they are worthless if you don't apply them / upgrade your libraries.
Be restrictive and paranoid. If you need the user to enter their name, be restrictive and let them use only [A-Za-z] characters and so on. Strong constraints will annoy the average user, but they will make your system more secure.
Never log critical data. This means you should not log things such as a user's password (obviously), but you should also not be tempted to log which password a user typed when they failed to log in (since it may be an easy-to-guess typo of the real one). You can extend this example to all critical data. Remember: if it's not there, you don't have to worry about someone trying to get it.
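The "be restrictive" point above amounts to whitelist validation. A minimal sketch (the field rules here are illustrative assumptions, not from the question):

```python
import re

# Hypothetical whitelist validator for a name field: letters, spaces,
# apostrophes and hyphens only, with a length cap. Note the class is
# spelled [A-Za-z]: the shorter range [A-z] would also accept characters
# like '[' and '^', which sit between 'Z' and 'a' in ASCII.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z '\-]{0,63}$")


def is_valid_name(value: str) -> bool:
    """Return True only if the input matches the strict whitelist."""
    return NAME_RE.fullmatch(value) is not None
```

Rejecting everything not explicitly allowed is what makes this a whitelist: injection payloads fail simply because they contain characters the field never permits.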
And extracted from Wikipedia's CSRF article:

- Requiring authentication in GET and POST parameters, not only cookies
- Checking the HTTP Referer header
- Ensuring there's no crossdomain.xml file granting unintended access to Flash movies
- Limiting the lifetime of authentication cookies
- When processing a POST, disregarding URL parameters if you know they should come from a form
- Requiring a secret, user-specific token in all form submissions and side-effect URLs prevents CSRF; the attacker's site can't put the right token in its submissions
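The last point, the synchronizer-token pattern, can be sketched as follows. This is a minimal illustration assuming a server-side secret and a session identifier; the names are hypothetical, and real frameworks (including ASP.Net's anti-forgery support) ship this built in:

```python
import hashlib
import hmac
import secrets

# Kept server-side, never sent to the client in raw form.
SERVER_SECRET = secrets.token_bytes(32)


def csrf_token(session_id: str) -> str:
    """Per-session token an attacker cannot forge without SERVER_SECRET."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()


def is_valid_csrf(session_id: str, submitted: str) -> bool:
    """Compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(csrf_token(session_id), submitted)
```

The server embeds `csrf_token(session_id)` in each form it renders and calls `is_valid_csrf` on submission; a cross-site attacker can make the victim's browser send the cookie, but cannot read or compute the token to include with it.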
My experience is that just giving the developers a toolbox and hoping for the best doesn't actually work all that well. Security is an aspect of code quality. Security issues are bugs. Like all bugs, even developers that know better will end up writing them anyway. The only way to solve that is to have a process in place to catch the bugs.
Think about what sort of security process you need. Automated testing only? Code review? Manual black-box testing? Design document review? How will you classify security issues in your bug tracking system? How will you determine priority to fix security bugs? What sort of guarantees will you be able to give to customers?
Something that may help you get started is the OWASP ASVS verification standard, which helps you verify that your security verification process actually works: http://code.google.com/p/owasp-asvs/wiki/ASVS
First best practice: be aware of the vulnerabilities while coding. When you code, think about what you are doing.