We installed Phabricator as a POC. We have Herald rules so that each commit requires an audit. We don't want to use Differential, since it blocks commits.
The problems are:
If I have 2 audits for the same file from 2 different revisions, how can I make a link to the last audit/group for all of them?
How can I look at a diff across a range of multiple revisions of the same file?
If we are using Phabricator incorrectly, please let us know what the best practices are.
Thanks
Create a Global Herald rule on Commits, have it match all commits that have no Differential revision, and set the action to audit the commit by [whatever project team] you want.
Two SQL scripts (update/rollback) are created for each version in our current projects. We would like to migrate to a DACPAC solution.
Each DACPAC project produces one .dacpac file at the end, so for each version I create two projects (one for update and one for rollback). The schema changes are in the DACPAC itself, while the pre-deployment and post-deployment scripts handle data migration.
To add a new version, I copy the current update project into a new update project and a new rollback project, then modify from there.
Any thoughts please?
I guess this comes down to whether you actually need to do all this work. The way I work with SSDT is to define what I want the current version to look like in the schema + code, and any upgrade scripts I need go into the post-deploy file as re-runnable (idempotent) scripts.
Whether a database is on version 1 or version 100, it still gets the same post-deploy script, but the script either checks for the data itself or checks a flag stored in the database saying that particular script has already been run. That should be pretty easy to set up, but it depends on your exact requirements.
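A minimal sketch of that flag-table pattern, assuming SQL Server/SSDT; the tracking table, script name, and Customer table below are purely illustrative, not anything SSDT provides:

    -- Post-deployment script sketch: re-runnable (idempotent) data migration.
    -- The tracking table and script name are illustrative only.
    IF OBJECT_ID('dbo.DeploymentScriptLog', 'U') IS NULL
    BEGIN
        CREATE TABLE dbo.DeploymentScriptLog
        (
            ScriptName nvarchar(255) NOT NULL PRIMARY KEY,
            AppliedOn  datetime2     NOT NULL DEFAULT (SYSUTCDATETIME())
        );
    END
    GO

    IF NOT EXISTS (SELECT 1 FROM dbo.DeploymentScriptLog
                   WHERE ScriptName = N'V42-backfill-customer-status')
    BEGIN
        -- The one-off data migration itself, e.g. a backfill.
        UPDATE dbo.Customer
        SET    Status = N'Active'
        WHERE  Status IS NULL;

        -- Record that this script has run so the block is skipped next time.
        INSERT INTO dbo.DeploymentScriptLog (ScriptName)
        VALUES (N'V42-backfill-customer-status');
    END
    GO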
To keep this manageable, it is a good idea to know when particular scripts have gone out to all of your environments and then remove them, so you only keep the post-deploy scripts you are actually going to need.
That being said, there are sometimes specific restrictions, which are normally:
Customers managing databases so you can't be sure what version they have
Regulatory (perceived or otherwise) requirements demanding upgrade/rollback scripts
Firstly, find out how set in stone your requirements are. The goal of SSDT is to not have to worry about upgrade/downgrade scripts (certainly for the schema). The questions I would ask are:
Is it enough to take a backup or snapshot before the deploy?
Can you leave off downgrade scripts and write them only if you ever need them?
How often are the rollback scripts actually used? (Knowing the answers to the first two can help here.)
If you have a good suite of tests and an automated deployment pipeline (even if it has a manual DBA step in it at some point), then these issues become less important, and over time, as everyone learns to trust the process, deploying changes becomes significantly faster and easier.
What are your restrictions?
Ed
If you find that you're investing a fair amount of effort putting logic into a post-deployment script, the chances are that a migrations-based approach (rather than the state-based approach) is better suited to you.
Examples are DbUp (open source) and ReadyRoll (commercial, and the one we develop here at Redgate; it has additional features such as auto-generation of scripts, integration with VS, etc.).
Migrations-based tools manage the versions (including the table Ed is referring to) on your behalf.
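For illustration only, the journal such tools keep is just a small table recording which scripts have already been applied. DbUp's default journal, for example, looks roughly like the sketch below; exact names and types vary by tool and version.

    -- Rough shape of the migrations journal a tool like DbUp maintains for you.
    CREATE TABLE dbo.SchemaVersions
    (
        Id         int           IDENTITY(1,1) NOT NULL PRIMARY KEY,
        ScriptName nvarchar(255) NOT NULL,  -- e.g. 'Scripts.0001-create-customer.sql'
        Applied    datetime      NOT NULL   -- when the script was run
    );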
We were checking the feasibility of using Phabricator in our software development activities.
We are currently using JIRA and are looking for a lighter replacement. We feel JIRA is a generic tool that doesn't focus only on software development, whereas Phabricator looks lighter and better integrated to us.
One feature we couldn't find is work logging. Currently we are using JIRA's work logging feature to extract data for project management reporting.
So basically my questions are:
Is a work logging feature available in Phabricator?
Would it be possible to extend Phabricator for this purpose?
Technically, yes, there is a work logging feature called Phrequent. It is one of the prototype applications in Phabricator (prototypes must be turned on from Config).
However, it has a lot of missing features. While individuals can start and stop work time on tasks, they cannot edit or delete time entries, and the reporting features are less than ideal (you can only view by person, not by time range or task). More features are planned, though they appear to be low priority right now for the core development team.
I understand Flyway currently has command-line support. As developers (or DevOps :)), we deploy automatically with Jenkins and the standard toolchain we have.
The issue is that when we do an application release, we have to apply DB patches. We can definitely automate them, but the process at our organization says "we have a DBA that needs to review your SQL before you apply it".
We know it's relatively pointless, but we still need to support it. Can you suggest a way of doing this? I mean, does Flyway have hooks that tell us the list of migration SQL that will be applied, and print it on screen or something, so the DBA can execute the same?
Or can we write some add-on to do the same?
Would really appreciate help here from the Flyway team.
This is the only thing stopping us from using Flyway in a real application.
This answer is about 6 months too late, sorry, and it's a 'you're doing the wrong thing' answer.
Code review is generally accepted as being a good thing, and this applies to database scripts just as much as java/es/go/c++/cobol/whatever code. However, to be effective, the review should be done as soon as possible after the code is written. Doing a review as part of an application release -- to any environment -- is way too late. By then the code is probably cold; furthermore, it's way past the point where development and changes are happening. Basically, it's too late to be doing reviews at this stage.
Rather than doing this, you need to engage your DBAs to do the review as part of the development process -- as early as possible, in fact. That way they'll never be in the position of trying to run an unreviewed script, and you'll be able to fully automate using Flyway (or any other tool you like).
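To make that concrete: with Flyway, the thing the DBA reviews is simply the versioned migration script committed alongside the application code (Flyway's V<version>__<description>.sql naming), so it goes through the same code review as everything else. A hypothetical example, with made-up file, table, and column names:

    -- V3__add_customer_email.sql (hypothetical migration, in the dialect of your target DB)
    -- Reviewed by the DBA during development, long before any release.
    ALTER TABLE customer ADD COLUMN email VARCHAR(320);
    CREATE INDEX idx_customer_email ON customer (email);

If the DBA still wants to see exactly what a given deployment will run, Flyway's info command lists pending migrations before migrate applies them.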
Hope this helps.
This is a repeat of a question in the (restricted) Tridion Forum about the inability to delete a Structure Group. However, since it didn't get a proper answer or solution for the person reporting it, I am re-asking it here.
I am stuck with a Structure Group which I can't delete either. It is not localized, is only blueprinted to one other Publication, and does not have any pages in it. The contents were migrated from a presentation environment; perhaps an old target is stuck somewhere?
Deleting it directly in the database is not an option. Any other solutions?
Is it possible you have multimedia components rendered using that Structure Group? That may cause some kind of lock. You might try running the Set Publish States PowerTool for 2009 to set everything to Unpublished in that Publication and see if it helps.
Brute force: start a DB trace, try to delete the Structure Group via the GUI, and look for the items it finds when checking for dependencies.
Or
Open a support ticket, send them the DB, let them take a look at it.
We came across similar issues at a customer. Our initial analysis was to examine the stored procedures that do the delete, and to see what constraints were enforced. On examining the data, we could see records that would not show up in the user interface, but which would prevent the deletion.
We raised a ticket with SDL Tridion customer support, and were able to agree with them which records should be modified in the database.
So that's the take-away from this: you aren't allowed to modify the database yourself, but SDL Tridion customer support can sanction it, once they have checked that the changes are correct and necessary. Obviously, if you were to attempt such changes without the cooperation of support, you'd end up with an unsupported system.
I have very little PeopleSoft experience but have been put in a position to support an install. This question could straddle Server Fault, but it is certainly developer-oriented.
On a daily basis, we have a PeopleSoft "developer" who writes scripts to fix records, journal entries, approval statuses, etc. To me this screams "bad install" and botched customizations. Is this normal? Is it best practice to have an employee writing scripts daily just to keep things running?
Note: there is no fraud happening here, he has the full approval of the accounting department when doing this.
It is unlikely that it is the installation. Likely causes:
Bad customization
Missing patches
Bugs in the delivered code
If you only have one admin, though, and you have only one developer, I would be shocked to hear that there is much in the way of custom code.
Back to the question: It is not normal to need to do SQL updates regularly to fix data. Yes, it happens, but not too often. It is also possible that the end users could fix it from the application, but do not for some reason.
Ad-hoc SQL updates can be dangerous, and the SQL may change with every request. It is difficult to fully test ad-hoc scripts because of the quick turnaround they typically require.
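For context, these fixes are typically one-off updates along the lines of the T-SQL-flavoured sketch below; every name, key, and status value here is hypothetical, not taken from any real system. Even wrapped in a transaction with a row-count check, each one is hand-written and hard to test, which is exactly the risk.

    -- Hypothetical one-off fix: flip a stuck approval status on one journal.
    DECLARE @rows int;

    BEGIN TRANSACTION;

    UPDATE PS_JRNL_HEADER
    SET    JRNL_HDR_STATUS = 'V'        -- hypothetical 'valid' status
    WHERE  BUSINESS_UNIT   = 'US001'
      AND  JOURNAL_ID      = '0001234567'
      AND  JRNL_HDR_STATUS = 'E';       -- only touch the row stuck in 'error'

    SET @rows = @@ROWCOUNT;

    -- Sanity check before committing: exactly one row should have changed.
    IF @rows = 1
        COMMIT TRANSACTION;
    ELSE
        ROLLBACK TRANSACTION;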
I assume these "fixes" are in fact making changes not implemented by the system.
It would be more sensible to either:
Build a custom page to "fix" the entries (or less sensible: modify the delivered pages).
Build and thoroughly test a parameter-driven App Engine to perform the most commonly made changes. It could potentially be run as part of the batch stream.
Watch out on your next upgrade: application tables have had a lot of changes in recent releases.